Conflicting Biases at the Edge of Stability: Norm versus Sharpness Regularization
By: Vit Fojtik, Maria Matveev, Hung-Hsu Chou, and more
Potential Business Impact:
Helps computers learn better by balancing two learning tricks.
A widely believed explanation for the remarkable generalization capabilities of overparameterized neural networks is that the optimization algorithms used for training induce an implicit bias towards benign solutions. To grasp this theoretically, recent works examine gradient descent and its variants in simplified training settings, often assuming vanishing learning rates. These studies reveal various forms of implicit regularization, such as $\ell_1$-norm-minimizing parameters in regression and max-margin solutions in classification. Concurrently, empirical findings show that moderate to large learning rates exceeding standard stability thresholds lead to faster, albeit oscillatory, convergence in the so-called Edge-of-Stability regime, and induce an implicit bias towards minima of low sharpness (the norm of the training loss Hessian). In this work, we argue that a comprehensive understanding of the generalization performance of gradient descent requires analyzing the interaction between these various forms of implicit regularization. We empirically demonstrate that the learning rate mediates a trade-off between low parameter norm and low sharpness of the trained model. We furthermore prove, for diagonal linear networks trained on a simple regression task, that neither implicit bias alone minimizes the generalization error. These findings demonstrate that focusing on a single implicit bias is insufficient to explain good generalization, and they motivate a broader view of implicit regularization that captures the dynamic trade-off between norm and sharpness induced by non-negligible learning rates.
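For concreteness, the sketch below (not the authors' code; the problem sizes, learning rates, and initialization scale are illustrative assumptions) trains a diagonal linear network, whose effective weights are $\beta = u \odot v$, with plain gradient descent on a toy sparse regression task and reports the two quantities the abstract contrasts: the $\ell_1$ norm of $\beta$ and the sharpness, measured here as the largest eigenvalue of the training loss Hessian with respect to $(u, v)$. As background, gradient descent on a quadratic with curvature $\lambda$ is stable only if the step size $\eta$ satisfies $\eta < 2/\lambda$; the Edge-of-Stability regime refers to the sharpness hovering near $2/\eta$. Raising the learning rate in this sketch toward that threshold is one way to probe the norm-versus-sharpness trade-off described above.

```python
# Minimal sketch (not the authors' code): gradient descent on a diagonal linear
# network f(x) = x @ (u * v) for a toy sparse regression task. All dimensions,
# learning rates, and the initialization scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                                  # overparameterized: n < d
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = 1.0                            # sparse ground truth
y = X @ beta_true

def loss(u, v):
    r = X @ (u * v) - y
    return 0.5 * np.mean(r ** 2)

def grads(u, v):
    r = X @ (u * v) - y
    g_beta = X.T @ r / n                       # gradient w.r.t. beta = u * v
    return g_beta * v, g_beta * u              # chain rule through the factorization

def sharpness(u, v, eps=1e-4):
    """Largest Hessian eigenvalue w.r.t. (u, v), via finite differences of the gradient."""
    theta = np.concatenate([u, v])
    def g(t):
        gu, gv = grads(t[:d], t[d:])
        return np.concatenate([gu, gv])
    H = np.stack([(g(theta + eps * e) - g(theta - eps * e)) / (2 * eps)
                  for e in np.eye(2 * d)])
    return float(np.max(np.linalg.eigvalsh(0.5 * (H + H.T))))

def train(lr, steps=10_000):
    u = np.full(d, 0.1)
    v = np.full(d, 0.1)                        # small, balanced initialization
    for _ in range(steps):
        gu, gv = grads(u, v)
        u, v = u - lr * gu, v - lr * gv
    return u, v

# Compare a small step size with a moderately large one; pushing lr further
# toward 2 / sharpness moves training into the edge-of-stability regime.
for lr in (0.01, 0.5):
    u, v = train(lr)
    print(f"lr={lr}: loss={loss(u, v):.2e}, "
          f"l1(beta)={np.abs(u * v).sum():.3f}, sharpness={sharpness(u, v):.3f}")
```

Whether oscillatory Edge-of-Stability dynamics actually appear depends on the chosen scales; the sketch only illustrates how the parameter norm and the sharpness of the reached minimum can be measured and compared across learning rates.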
Similar Papers
Convergence Rates for Gradient Descent on the Edge of Stability in Overparametrised Least Squares
Machine Learning (CS)
Helps computers learn faster by finding better solutions.
Linear regression with overparameterized linear neural networks: Tight upper and lower bounds for implicit $\ell^1$-regularization
Machine Learning (Stat)
Deeper AI learns better from less data.
Generalization Below the Edge of Stability: The Role of Data Geometry
Machine Learning (Stat)
Helps computers learn better by understanding data shapes.