Toric geometry of ReLU neural networks
By: Yaoying Fu
Potential Business Impact:
Maps math shapes to computer learning.
Given a continuous finitely piecewise linear function $f:\mathbb{R}^{n_0} \to \mathbb{R}$ and a fixed architecture $(n_0,\ldots,n_k;1)$ of feedforward ReLU neural networks, the exact function realization problem is to determine when some network with the given architecture realizes $f$. To develop a systematic way to answer this question, we establish a connection between toric geometry and ReLU neural networks. This approach enables us to use numerous structures and tools from algebraic geometry to study ReLU neural networks. Starting with an unbiased ReLU neural network with rational weights, we define the ReLU fan, the ReLU toric variety, and the ReLU Cartier divisor associated with the network. This work also reveals the connection between the tropical geometry and the toric geometry of ReLU neural networks. As an application of the toric geometry framework, we prove a necessary and sufficient criterion for functions realizable by unbiased shallow ReLU neural networks by computing intersection numbers of the ReLU Cartier divisor with torus-invariant curves.
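To make the setting concrete, here is a minimal sketch of an unbiased shallow ReLU network, i.e. one with no bias terms, of architecture $(2, 3; 1)$. The specific weights are hypothetical, chosen only for illustration; the abstract's rationality assumption is satisfied here, and the absence of biases makes the realized function positively homogeneous as well as piecewise linear.

```python
def relu(t):
    # ReLU activation: max(0, t)
    return max(0.0, t)

def dot(u, v):
    # Euclidean inner product of two tuples
    return sum(a * b for a, b in zip(u, v))

# Hypothetical rational hidden-layer weights w_i and output weights c_i.
W = [(1.0, 0.0), (0.0, 1.0), (1.0, -1.0)]
C = [1.0, -2.0, 0.5]

def f(x):
    """Unbiased shallow ReLU network of architecture (2, 3; 1):
    f(x) = sum_i c_i * relu(w_i . x)."""
    return sum(c * relu(dot(w, x)) for c, w in zip(C, W))

# With no bias terms, f is positively homogeneous: f(t*x) = t*f(x) for t >= 0.
x = (0.7, -1.3)
assert abs(f((2 * x[0], 2 * x[1])) - 2 * f(x)) < 1e-12
```

The regions on which $f$ is linear are cut out by the hyperplanes $w_i \cdot x = 0$; these cones are the kind of combinatorial data that the ReLU fan of the paper is built to record.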
Similar Papers
Constraining the outputs of ReLU neural networks
Algebraic Geometry
Maps how computer brains learn to think.
Discrete Functional Geometry of ReLU Networks via ReLU Transition Graphs
Machine Learning (CS)
Helps computers learn better by understanding their "thinking."
The Geometry of ReLU Networks through the ReLU Transition Graph
Machine Learning (CS)
Maps how computer brains learn to work better.