Training Diagonal Linear Networks with Stochastic Sharpness-Aware Minimization
By: Gabriel Clara, Sophie Langer, Johannes Schmidt-Hieber
Potential Business Impact:
Helps machine learning models train faster and more accurately.
We analyze the landscape and training dynamics of diagonal linear networks in a linear regression task, with the network parameters being perturbed by small isotropic normal noise. The addition of such noise may be interpreted as a stochastic form of sharpness-aware minimization (SAM), and we prove several results that relate its action on the underlying landscape and training dynamics to the sharpness of the loss. In particular, the noise changes the expected gradient to force balancing of the weight matrices at a fast rate along the descent trajectory. In the diagonal linear model, we show that this equates to minimizing the average sharpness, as well as the trace of the Hessian matrix, among all possible factorizations of the same matrix. Further, the noise forces the gradient descent iterates towards a shrinkage-thresholding of the underlying true parameter, with the noise level explicitly regulating both the shrinkage factor and the threshold.
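For intuition, the setting can be sketched in a few lines of code: a diagonal linear network parameterizes the regression vector as an elementwise product beta = u * v, and the stochastic SAM variant evaluates the gradient at parameters perturbed by small isotropic Gaussian noise before updating the unperturbed weights. The NumPy sketch below is illustrative only; the data, initialization, step size, and noise level `sigma` are assumptions for demonstration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression data: y = X @ beta_true + observation noise.
n, d = 200, 20
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def loss_grad(u, v):
    """Squared-loss gradients for the diagonal linear network beta = u * v."""
    r = X @ (u * v) - y              # residuals
    g_beta = X.T @ r / n             # gradient w.r.t. beta
    return g_beta * v, g_beta * u    # chain rule: d/du and d/dv

# Small, balanced initialization (illustrative choice).
u = np.full(d, 0.1)
v = np.full(d, 0.1)
lr, sigma, steps = 0.05, 0.05, 5000

for _ in range(steps):
    # Stochastic SAM step: perturb the parameters with isotropic Gaussian
    # noise, evaluate the gradient there, then update the unperturbed weights.
    eps_u = sigma * rng.standard_normal(d)
    eps_v = sigma * rng.standard_normal(d)
    gu, gv = loss_grad(u + eps_u, v + eps_v)
    u -= lr * gu
    v -= lr * gv

print("recovered beta:", np.round(u * v, 3))
print("max balancing gap |u^2 - v^2|:", np.round(np.abs(u**2 - v**2).max(), 4))
```

The final print statements track the two effects described in the abstract: the recovered product u * v should be a shrunk and thresholded version of the true parameter, and the gap between u^2 and v^2 indicates how strongly the noisy gradients have balanced the two factors.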
Similar Papers
Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise
Machine Learning (CS)
Makes machine learning models work better.
Diagonal Linear Networks and the Lasso Regularization Path
Machine Learning (CS)
Teaches computers to learn using a math shortcut.
Understanding Sharpness Dynamics in NN Training with a Minimalist Example: The Effects of Dataset Difficulty, Depth, Stochasticity, and More
Machine Learning (CS)
Makes computer brains learn better and faster.