Training Diagonal Linear Networks with Stochastic Sharpness-Aware Minimization

Published: March 14, 2025 | arXiv ID: 2503.11891v1

By: Gabriel Clara, Sophie Langer, Johannes Schmidt-Hieber

Potential Business Impact:

Clarifies how noise-based sharpness-aware training implicitly regularizes models, which can guide the design of training procedures that generalize more reliably.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

We analyze the landscape and training dynamics of diagonal linear networks in a linear regression task, with the network parameters being perturbed by small isotropic normal noise. The addition of such noise may be interpreted as a stochastic form of sharpness-aware minimization (SAM) and we prove several results that relate its action on the underlying landscape and training dynamics to the sharpness of the loss. In particular, the noise changes the expected gradient to force balancing of the weight matrices at a fast rate along the descent trajectory. In the diagonal linear model, we show that this equates to minimizing the average sharpness, as well as the trace of the Hessian matrix, among all possible factorizations of the same matrix. Further, the noise forces the gradient descent iterates towards a shrinkage-thresholding of the underlying true parameter, with the noise level explicitly regulating both the shrinkage factor and the threshold.
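To make the setup concrete, below is a minimal sketch of the training scheme the abstract describes: a diagonal linear network (predictions use the elementwise product of two weight vectors) trained on a linear regression task, with the gradient at each step evaluated at parameters perturbed by small isotropic Gaussian noise. The data dimensions, learning rate, and noise level `sigma` are hypothetical choices for illustration, and the perturb-then-step loop is one plausible reading of the stochastic SAM scheme, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression data (hypothetical dimensions).
n, d = 200, 20
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:3] = [2.0, -1.5, 1.0]       # sparse ground-truth parameter
y = X @ theta_star + 0.1 * rng.standard_normal(n)

# Diagonal linear network: predictions use theta = u * v (elementwise).
u = 0.1 * rng.standard_normal(d)
v = 0.1 * rng.standard_normal(d)

lr, sigma, steps = 0.01, 0.05, 5000     # sigma = noise level (assumed)

def grad(u, v):
    """Gradients of the squared loss w.r.t. u and v via the chain rule."""
    theta = u * v
    residual = X @ theta - y
    g_theta = X.T @ residual / n        # gradient w.r.t. theta
    return g_theta * v, g_theta * u

for t in range(steps):
    # Stochastic SAM: evaluate the gradient at parameters perturbed by
    # small isotropic Gaussian noise, then step from the unperturbed point.
    eps_u = sigma * rng.standard_normal(d)
    eps_v = sigma * rng.standard_normal(d)
    g_u, g_v = grad(u + eps_u, v + eps_v)
    u -= lr * g_u
    v -= lr * g_v

theta_hat = u * v
print("balancing gap max|u^2 - v^2|:", np.max(np.abs(u**2 - v**2)))
print("recovered theta (first 5):", np.round(theta_hat[:5], 3))
```

Under the paper's analysis, the noise-averaged gradient should drive the balancing gap `|u**2 - v**2|` toward zero along the descent trajectory, while the recovered `theta_hat` behaves like a shrinkage-thresholded version of the true parameter, with both effects controlled by `sigma`.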

Country of Origin
🇳🇱 Netherlands

Page Count
54 pages

Category
Computer Science:
Machine Learning (CS)