Constraining the outputs of ReLU neural networks
By: Yulia Alexandr, Guido Montúfar
We introduce a class of algebraic varieties naturally associated with ReLU neural networks, arising from the piecewise linear structure of their outputs across activation regions in input space, and the piecewise multilinear structure in parameter space. By analyzing the rank constraints on the network outputs within each activation region, we derive polynomial equations that characterize the functions representable by the network. We further investigate conditions under which these varieties attain their expected dimension, providing insight into the expressive and structural properties of ReLU networks.
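The piecewise linear structure described above can be illustrated with a minimal numpy sketch. The toy one-hidden-layer network and its random weights below are hypothetical, chosen purely for illustration: each input's activation pattern (which ReLU units are "on") determines an activation region, and within that region the network reduces to a single affine map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: one hidden ReLU layer, 2 inputs, 4 hidden units, 1 output.
W1 = rng.standard_normal((4, 2))
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((1, 4))
b2 = rng.standard_normal(1)

def forward(x):
    """Full ReLU network output."""
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def pattern(x):
    """Activation pattern at x: which hidden units are active (pre-activation > 0)."""
    return tuple((W1 @ x + b1 > 0).astype(int))

# Fix an input and its activation region. Within that region the ReLU acts as
# multiplication by D = diag(pattern), so the network is the affine map
#   x -> W2 D W1 x + (W2 D b1 + b2).
x0 = np.array([0.3, -0.2])
D = np.diag(pattern(x0))

def affine(x):
    """Affine map agreeing with the network on x0's activation region."""
    return W2 @ D @ (W1 @ x + b1) + b2

# The full network and the regional affine map agree at x0, and at any nearby
# point that stays in the same activation region.
print(np.allclose(forward(x0), affine(x0)))  # True
x1 = x0 + 1e-6 * rng.standard_normal(2)
if pattern(x1) == pattern(x0):
    print(np.allclose(forward(x1), affine(x1)))
```

In parameter space the entries of the regional affine map are products of weight-matrix entries, which is the multilinear structure the abstract refers to; the rank constraints on these affine maps are what yield the polynomial equations cutting out the associated varieties.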
Similar Papers
Toric geometry of ReLU neural networks
Algebraic Geometry
The Geometry of ReLU Networks through the ReLU Transition Graph
Machine Learning (CS)
Discrete Functional Geometry of ReLU Networks via ReLU Transition Graphs
Machine Learning (CS)