Generalization Below the Edge of Stability: The Role of Data Geometry
By: Tongtong Liang, Alexander Cloninger, Rahul Parhi, and more
Potential Business Impact:
Explains when neural networks generalize well based on the shape of their training data.
Understanding generalization in overparameterized neural networks hinges on the interplay between the data geometry, neural architecture, and training dynamics. In this paper, we theoretically explore how data geometry controls the implicit bias of gradient descent, presenting results for overparameterized two-layer ReLU networks trained below the edge of stability. First, for data distributions supported on a mixture of low-dimensional balls, we derive generalization bounds that provably adapt to the intrinsic dimension. Second, for a family of isotropic distributions that vary in how strongly probability mass concentrates toward the unit sphere, we derive a spectrum of bounds showing that rates deteriorate as the mass concentrates toward the sphere. These results instantiate a unifying principle: When the data is harder to "shatter" with respect to the activation thresholds of the ReLU neurons, gradient descent tends to learn representations that capture shared patterns and thus finds solutions that generalize well. On the other hand, for data that is easily shattered (e.g., data supported on the sphere), gradient descent favors memorization. Our theoretical results consolidate disparate empirical findings that have appeared in the literature.
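To make the setting concrete, here is a minimal sketch (not the paper's experimental protocol) that samples the two data geometries contrasted in the abstract, a mixture of low-dimensional balls embedded in a higher-dimensional space versus points on the unit sphere, and trains an overparameterized two-layer ReLU network with full-batch gradient descent at a small fixed step size, the regime consistent with staying below the edge of stability (step size less than 2 divided by the loss sharpness). The network width, step size, dimensions, and label models are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: data geometries and a two-layer ReLU net trained by
# gradient descent. Widths, step sizes, and label rules are assumptions, not the
# paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def sample_ball_mixture(n, ambient_dim=20, intrinsic_dim=3, n_balls=4, radius=0.2):
    """Data supported on a mixture of low-dimensional balls embedded in R^ambient_dim."""
    centers = rng.standard_normal((n_balls, ambient_dim))
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    idx = rng.integers(n_balls, size=n)
    # Uniform sample from an intrinsic_dim-dimensional ball: uniform direction,
    # radius distributed as U^(1/intrinsic_dim).
    u = rng.standard_normal((n, intrinsic_dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    u *= rng.uniform(size=(n, 1)) ** (1.0 / intrinsic_dim)
    basis = np.linalg.qr(rng.standard_normal((ambient_dim, intrinsic_dim)))[0]
    labels = (idx % 2) * 2.0 - 1.0          # +/-1 label shared by alternating balls
    return centers[idx] + radius * u @ basis.T, labels

def sample_sphere(n, ambient_dim=20):
    """Data supported on the unit sphere (the 'easily shattered' regime)."""
    x = rng.standard_normal((n, ambient_dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x, np.sign(x[:, 0])              # arbitrary linear label, for illustration

def train_two_layer_relu(X, y, width=512, lr=0.01, steps=2000):
    """Full-batch gradient descent on squared loss. The step size lr is kept small so
    that training plausibly stays below the edge of stability (lr < 2 / sharpness)."""
    n, d = X.shape
    W = rng.standard_normal((d, width)) / np.sqrt(d)    # inner-layer weights
    a = rng.standard_normal(width) / np.sqrt(width)     # outer-layer weights
    for _ in range(steps):
        pre = X @ W                          # (n, width) pre-activations
        h = np.maximum(pre, 0.0)             # ReLU features
        err = h @ a - y                      # squared-loss residual
        grad_a = h.T @ err / n
        grad_W = X.T @ ((err[:, None] * a) * (pre > 0)) / n
        a -= lr * grad_a
        W -= lr * grad_W
    return W, a

# Contrast the two geometries (training error only; an illustration, not a bound).
Xb, yb = sample_ball_mixture(200)
Wb, ab = train_two_layer_relu(Xb, yb)
print("train MSE (ball mixture):", np.mean((np.maximum(Xb @ Wb, 0) @ ab - yb) ** 2))

Xs, ys = sample_sphere(200)
Ws, as_ = train_two_layer_relu(Xs, ys)
print("train MSE (sphere):      ", np.mean((np.maximum(Xs @ Ws, 0) @ as_ - ys) ** 2))
```

In this sketch the ball-mixture data has low intrinsic dimension relative to the ambient space, while the sphere data places all mass on the unit sphere; the paper's bounds contrast exactly these two regimes, with rates adapting to intrinsic dimension in the first case and deteriorating as mass concentrates toward the sphere in the second.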
Similar Papers
Does Flatness imply Generalization for Logistic Loss in Univariate Two-Layer ReLU Network?
Machine Learning (CS)
Makes computer learning more reliable for some tasks.
Neural Feature Geometry Evolves as Discrete Ricci Flow
Machine Learning (CS)
Helps computers learn better by understanding shapes.
Symmetry and Generalisation in Neural Approximations of Renormalisation Transformations
Machine Learning (CS)
Makes computers learn patterns in physics better.