The Geometry of ReLU Networks through the ReLU Transition Graph
By: Sahil Rajesh Dhayalkar
Potential Business Impact:
Maps the internal structure of ReLU networks, pointing toward better model compression, regularization, and complexity control.
We develop a novel theoretical framework for analyzing ReLU neural networks through the lens of a combinatorial object we term the ReLU Transition Graph (RTG). In this graph, each node corresponds to a linear region induced by the network's activation patterns, and edges connect regions that differ by a single neuron flip. Building on this structure, we derive a suite of new theoretical results connecting RTG geometry to expressivity, generalization, and robustness. Our contributions include tight combinatorial bounds on RTG size and diameter, a proof of RTG connectivity, and graph-theoretic interpretations of VC-dimension. We also relate entropy and average degree of the RTG to generalization error. Each theoretical result is rigorously validated via carefully controlled experiments across varied network depths, widths, and data regimes. This work provides the first unified treatment of ReLU network structure via graph theory and opens new avenues for compression, regularization, and complexity control rooted in RTG analysis.
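To make the RTG construction concrete, below is a minimal sketch (not from the paper) that approximates it for a toy fully-connected ReLU network: sampled inputs are mapped to their binary activation patterns (one linear region per distinct pattern), and edges are drawn between patterns that differ in exactly one neuron. All names, network sizes, and the sampling-based region discovery are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy ReLU network: 2 inputs -> 8 hidden -> 8 hidden (random weights as stand-ins).
layers = [(rng.standard_normal((8, 2)), rng.standard_normal(8)),
          (rng.standard_normal((8, 8)), rng.standard_normal(8))]

def activation_pattern(x):
    """Return the on/off pattern of every ReLU unit at input x."""
    bits, h = [], x
    for W, b in layers:
        pre = W @ h + b
        bits.extend((pre > 0).astype(int))
        h = np.maximum(pre, 0.0)
    return tuple(bits)

# Sample the input space; each distinct pattern corresponds to one linear region (an RTG node).
points = rng.uniform(-3, 3, size=(20000, 2))
nodes = {activation_pattern(x) for x in points}

# RTG edges: regions whose patterns differ by a single neuron flip (Hamming distance 1).
edges = {frozenset((u, v))
         for u, v in itertools.combinations(nodes, 2)
         if sum(a != b for a, b in zip(u, v)) == 1}

print(f"linear regions (nodes): {len(nodes)}, single-flip edges: {len(edges)}")
```

Sampling only discovers regions that the drawn points happen to hit, so this sketch underestimates the true region count; it is meant only to illustrate the node and edge definitions stated in the abstract.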
Similar Papers
Discrete Functional Geometry of ReLU Networks via ReLU Transition Graphs
Machine Learning (CS)
Helps computers learn better by mapping their thinking.
Toric geometry of ReLU neural networks
Algebraic Geometry
Maps math shapes to computer learning.