On the Convergence Behavior of Preconditioned Gradient Descent Toward the Rich Learning Regime
By: Shuai Jiang, Alexey Voronin, Eric Cyr, and more
Potential Business Impact:
Helps computers learn details faster and better.
Spectral bias, the tendency of neural networks to learn low frequencies first, can be both a blessing and a curse. While it enhances generalization by suppressing high-frequency noise, it can be a limitation in scientific tasks that require capturing fine-scale structures. The delayed generalization phenomenon known as grokking is another barrier to rapid training of neural networks. Grokking has been hypothesized to arise as learning transitions from the neural tangent kernel (NTK) regime to the feature-rich regime. This paper explores the impact of preconditioned gradient descent (PGD), such as Gauss-Newton methods, on spectral bias and grokking. We demonstrate through theoretical and empirical results how PGD can mitigate issues associated with spectral bias. Additionally, building on the hypothesis that grokking marks the transition to the rich learning regime, we study how PGD can be used to reduce the delays associated with grokking. Our conjecture is that PGD, freed from the impediment of spectral bias, enables uniform exploration of the parameter space in the NTK regime. Our experimental results confirm this prediction, providing strong evidence that grokking represents a transitional behavior between the lazy regime characterized by the NTK and the rich regime. These findings deepen our understanding of the interplay between optimization dynamics, spectral bias, and the phases of neural network learning.
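To make the core idea concrete, below is a minimal sketch of one damped Gauss-Newton preconditioned gradient-descent step for a least-squares loss L(theta) = 0.5 * ||f(theta, x) - y||^2. The toy model, the damping parameter `lam`, and the step size `lr` are illustrative assumptions, not the paper's exact setup; the update solves (J^T J + lam I) d = J^T r rather than stepping along the raw gradient, which is what removes the frequency-dependent scaling behind spectral bias.

```python
import jax
import jax.numpy as jnp

def model(theta, x):
    # Tiny two-parameter model: f(x) = theta[0] * sin(theta[1] * x).
    return theta[0] * jnp.sin(theta[1] * x)

def residuals(theta, x, y):
    return model(theta, x) - y

def gauss_newton_step(theta, x, y, lam=1e-3, lr=1.0):
    r = residuals(theta, x, y)
    J = jax.jacobian(residuals)(theta, x, y)       # shape: (n_samples, n_params)
    grad = J.T @ r                                 # plain gradient of the loss
    precond = J.T @ J + lam * jnp.eye(theta.size)  # damped Gauss-Newton matrix
    # Preconditioned update: solve for the descent direction instead of
    # stepping along the raw gradient.
    direction = jnp.linalg.solve(precond, grad)
    return theta - lr * direction

# Usage: recover the amplitude and frequency of a sine from noisy samples.
key = jax.random.PRNGKey(0)
x = jnp.linspace(0.0, 2.0 * jnp.pi, 64)
y = 2.0 * jnp.sin(3.0 * x) + 0.01 * jax.random.normal(key, x.shape)
theta = jnp.array([1.5, 2.8])
for _ in range(20):
    theta = gauss_newton_step(theta, x, y)
```

In this sketch the preconditioner rescales all curvature directions toward unity, so high-frequency components of the target are learned at roughly the same rate as low-frequency ones, in the spirit of the paper's argument that PGD mitigates spectral bias.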
Similar Papers
From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent
Image and Video Processing
Fixes broken pictures using smart math.
The Operator Origins of Neural Scaling Laws: A Generalized Spectral Transport Dynamics of Deep Learning
Machine Learning (CS)
Makes AI learn faster and better.
Grokking Beyond the Euclidean Norm of Model Parameters
Machine Learning (CS)
Makes AI learn better after seeming to forget.