Low Rank Gradients and Where to Find Them
By: Rishi Sonthalia, Michael Murray, Guido Montúfar
Potential Business Impact:
Teaches computers to learn better from messy data.
This paper investigates low-rank structure in the gradients of the training loss for two-layer neural networks while relaxing the usual isotropy assumptions on the training data and parameters. We consider a spiked data model in which the bulk can be anisotropic and ill-conditioned, we do not require the data and weight matrices to be independent, and we analyze both the mean-field and neural-tangent-kernel scalings. We show that the gradient with respect to the input weights is approximately low rank and is dominated by two rank-one terms: one aligned with the bulk data residue, and another aligned with the rank-one spike in the input data. We characterize how properties of the training data, the scaling regime, and the activation function govern the balance between these two components. We further demonstrate that standard regularizers, such as weight decay, input noise, and Jacobian penalties, selectively modulate these components. Experiments on synthetic and real data corroborate our theoretical predictions.
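The sketch below is a minimal numerical illustration of the kind of check the abstract describes, not the authors' code: it builds a spiked, anisotropic data matrix, computes the squared-loss gradient with respect to the input weights of a two-layer network under an assumed NTK-style scaling, and inspects how much of the gradient's energy sits in its top two singular directions and how those directions relate to the spike. All dimensions, the tanh activation, and the scaling choice are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's experiments): test whether the
# gradient of the squared loss w.r.t. the input weights W of a two-layer network
# is approximately rank two under a spiked data model.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 512, 64, 256                      # samples, input dim, hidden width (assumed)

# Spiked data: anisotropic, ill-conditioned bulk plus a rank-one spike.
bulk_cov = np.diag(np.linspace(0.1, 5.0, d))        # ill-conditioned bulk covariance
Z = rng.standard_normal((n, d)) @ np.sqrt(bulk_cov) # bulk component
u = rng.standard_normal(n) / np.sqrt(n)             # spike factor over samples
v = rng.standard_normal(d); v /= np.linalg.norm(v)  # spike direction in input space
X = Z + 10.0 * np.outer(u, v)                       # rows are data points
y = rng.standard_normal(n)                          # placeholder targets

# Two-layer network f(x) = a^T tanh(W x) with an assumed NTK-style scaling.
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.standard_normal(m) / np.sqrt(m)

pre = X @ W.T                     # (n, m) pre-activations
act = np.tanh(pre)
resid = act @ a - y               # (n,) residuals of the squared loss

# Gradient of 0.5 * mean squared loss w.r.t. W:
#   dL/dW = (1/n) * sum_i resid_i * (a ⊙ tanh'(W x_i)) x_i^T
grad_W = ((resid[:, None] * (1.0 - act**2)) * a[None, :]).T @ X / n   # (m, d)

# If the gradient is dominated by two rank-one terms, the singular values should
# decay sharply, and the spike direction v should lie largely in the span of the
# top right singular vectors.
U, s, Vt = np.linalg.svd(grad_W)
print("top 5 singular values:", np.round(s[:5], 4))
print("fraction of energy in top 2:", np.round((s[:2]**2).sum() / (s**2).sum(), 3))
print("norm of spike v projected onto top-2 right singular directions:",
      np.round(np.linalg.norm(Vt[:2] @ v), 3))
```

Varying the spike strength, the bulk conditioning, or the width scaling in this sketch is one way to probe, informally, how the balance between the two rank-one components shifts across regimes, which is the question the paper studies in detail.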
Similar Papers
Inductive Bias and Spectral Properties of Single-Head Attention in High Dimensions
Machine Learning (Stat)
Helps AI learn better by understanding how it works.
Risk Phase Transitions in Spiked Regression: Alignment Driven Benign and Catastrophic Overfitting
Machine Learning (Stat)
Finds when math models make wrong guesses.
An Analytical Characterization of Sloppiness in Neural Networks: Insights from Linear Models
Machine Learning (CS)
Finds simple patterns in how computer brains learn.