Universality of high-dimensional scaling limits of stochastic gradient descent
By: Reza Gheissari, Aukosh Jagannath
We consider statistical tasks in high dimensions whose loss depends on the data only through its projection onto a fixed-dimensional subspace spanned by the parameter vectors and certain ground-truth vectors. This includes classifying mixture distributions with the cross-entropy loss, and learning single- and multi-index models, with one- and two-layer networks. When the data is drawn from an isotropic Gaussian mixture distribution, it is known that the evolution of a finite family of summary statistics under stochastic gradient descent converges to an autonomous ordinary differential equation (ODE) as the dimension and sample size go to $\infty$ and the step size goes to $0$ commensurately. Our main result is that these ODE limits are universal: the same convergence holds when the data is instead drawn from mixtures of product measures, provided the first two moments match those of the corresponding Gaussian distribution and the initialization and ground-truth vectors are sufficiently coordinate-delocalized. We complement this with two corresponding non-universality results. We give a simple example in which the ODE limits fail to be universal when the initialization is coordinate-aligned, and we show that the stochastic differential equation limits describing the fluctuations of the summary statistics around the fixed points of the ODE are not universal.
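As an illustration of the kind of scaling limit and universality statement described above, here is a minimal sketch, not the authors' construction: online SGD for a single-index model with square loss and a tanh link, tracking the summary statistics m = <theta, theta*>/d and q = ||theta||^2/d against "ODE time" t = step/d, run once on i.i.d. Gaussian data and once on i.i.d. Rademacher data with matching first two moments. All concrete choices (link function, step size, dimension, and the names run_sgd and sample_x) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def run_sgd(sample_x, d=4000, eta=0.5, ode_time=8.0, seed=0):
    """Online SGD sketch: one fresh sample per step, summary statistics
    (m, q) recorded against ODE time t = step / d.  Illustrative only."""
    rng = np.random.default_rng(seed)

    # Ground-truth direction, normalized so that ||theta*||^2 = d.
    theta_star = rng.standard_normal(d)
    theta_star *= np.sqrt(d) / np.linalg.norm(theta_star)

    # Delocalized initialization: no coordinate carries macroscopic mass.
    theta = rng.standard_normal(d)

    phi = np.tanh
    dphi = lambda z: 1.0 - np.tanh(z) ** 2

    n_steps = int(ode_time * d)
    traj = []
    for step in range(n_steps):
        x = sample_x(rng, d)                       # fresh data point
        z_star = x @ theta_star / np.sqrt(d)
        z = x @ theta / np.sqrt(d)
        # Gradient in theta of the per-sample loss 0.5*(phi(z) - phi(z_star))^2.
        grad = (phi(z) - phi(z_star)) * dphi(z) * x / np.sqrt(d)
        theta -= eta * grad
        if step % (d // 10) == 0:
            traj.append((step / d,                 # ODE time
                         theta @ theta_star / d,   # m: overlap with theta*
                         theta @ theta / d))       # q: squared norm / d
    return np.array(traj)

# Two data distributions with the same first two moments.
gauss = lambda rng, d: rng.standard_normal(d)
rademacher = lambda rng, d: rng.integers(0, 2, size=d) * 2.0 - 1.0

traj_g = run_sgd(gauss)
traj_r = run_sgd(rademacher)
print("final (t, m, q), Gaussian data:  ", traj_g[-1])
print("final (t, m, q), Rademacher data:", traj_r[-1])
```

Under a delocalization condition of the kind assumed in the paper, the two trajectories should nearly coincide for large d; by contrast, a coordinate-aligned initialization (say, proportional to a standard basis vector) is the type of situation in which the non-universality example applies.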