Flat Minima and Generalization: Insights from Stochastic Convex Optimization
By: Matan Schliserman, Shira Vansover-Hager, Tomer Koren
Potential Business Impact:
Shows when training methods that seek "flat" solutions actually help computers learn, and when they do not.
Understanding the generalization behavior of learning algorithms is a central goal of learning theory. A recently emerging explanation is that learning algorithms are successful in practice because they converge to flat minima, which have been consistently associated with improved generalization performance. In this work, we study the link between flat minima and generalization in the canonical setting of stochastic convex optimization with a non-negative, $\beta$-smooth objective. Our first finding is that, even in this fundamental and well-studied setting, flat empirical minima may incur trivial $\Omega(1)$ population risk while sharp minima generalize optimally. We then show that this poor generalization behavior extends to two natural "sharpness-aware" algorithms originally proposed by Foret et al. (2021), designed to bias optimization toward flat solutions: Sharpness-Aware Gradient Descent (SA-GD) and Sharpness-Aware Minimization (SAM). For SA-GD, which performs gradient steps on the maximal loss within a predefined neighborhood, we prove that while it successfully converges to a flat minimum at a fast rate, the population risk of the solution can still be as large as $\Omega(1)$, indicating that even flat minima found algorithmically by a sharpness-aware gradient method might generalize poorly. For SAM, a computationally efficient approximation of SA-GD based on normalized ascent steps, we show that although it minimizes the empirical loss, it may converge to a sharp minimum and likewise incur population risk $\Omega(1)$. Finally, we establish population risk upper bounds for both SA-GD and SAM using algorithmic stability techniques.
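To make the two algorithms above concrete, here is a minimal sketch of a single SAM step, which replaces SA-GD's inner maximization $\max_{\|\epsilon\| \le \rho} L(w + \epsilon)$ with one normalized gradient ascent step (Foret et al., 2021). The function names and hyperparameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step.

    loss_grad(w) returns the gradient of the empirical loss at w;
    lr and rho are illustrative hyperparameters, not values from the paper.
    """
    g = loss_grad(w)
    g_norm = np.linalg.norm(g)
    if g_norm == 0.0:
        return w  # stationary point of the empirical loss: nothing to do
    # Normalized ascent step: move to an approximate worst-case point
    # in the rho-ball around w (SAM's cheap stand-in for SA-GD's exact max).
    w_adv = w + rho * g / g_norm
    # Descend using the gradient evaluated at the perturbed point.
    return w - lr * loss_grad(w_adv)

# Example: a smooth convex quadratic, where each update can be checked by hand.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])  # ill-conditioned, so one direction is "sharp"
    loss_grad = lambda w: A @ w
    w = np.array([1.0, 1.0])
    for _ in range(100):
        w = sam_step(w, loss_grad, lr=0.05, rho=0.05)
    print(w)  # settles near the minimizer at the origin
```

SA-GD would instead compute the exact maximizer of the loss over the $\rho$-ball at every step before taking the gradient; SAM's single normalized ascent step is what makes it computationally efficient, and, per the abstract, also what allows it to end up at a sharp minimum.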
Similar Papers
A Function Centric Perspective On Flat and Sharp Minima
Machine Learning (CS)
Rethinks flat and sharp minima in terms of the function a model computes, not just its parameters.
Understanding Flatness in Generative Models: Its Role and Benefits
CV and Pattern Recognition
Explains how flatness helps generative models produce more stable, reliable outputs.
Sharp Minima Can Generalize: A Loss Landscape Perspective On Data
Machine Learning (CS)
Shows that sharp minima can still generalize well, depending on properties of the data.