Transient learning dynamics drive escape from sharp valleys in Stochastic Gradient Descent
By: Ning Yang, Yikuan Zhang, Qi Ouyang, and more
Potential Business Impact:
Makes AI learn better by finding smoother paths.
Stochastic gradient descent (SGD) is central to deep learning, yet the dynamical origin of its preference for flatter, more generalizable solutions remains unclear. Here, by analyzing SGD learning dynamics, we identify a nonequilibrium mechanism governing solution selection. Numerical experiments reveal a transient exploratory phase in which SGD trajectories repeatedly escape sharp valleys and transition toward flatter regions of the loss landscape. By using a tractable physical model, we show that the SGD noise reshapes the landscape into an effective potential that favors flat solutions. Crucially, we uncover a transient freezing mechanism: as training proceeds, growing energy barriers suppress inter-valley transitions and ultimately trap the dynamics within a single basin. Increasing the SGD noise strength delays this freezing, which enhances convergence to flatter minima. Together, these results provide a unified physical framework linking learning dynamics, loss-landscape geometry, and generalization, and suggest principles for the design of more effective optimization algorithms.
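The escape-and-freezing picture in the abstract can be illustrated with a small numerical experiment. The sketch below is our own illustrative example, not the authors' model or code: it runs noisy gradient descent on a 1D loss with a sharp well and an equally deep flat well, and checks how often trajectories that start in the sharp valley end up in the flat one. All names (run_sgd, grad) and parameter values are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative 1D loss: a sharp Gaussian well and an equally deep but wider
# (flatter) Gaussian well, plus a weak confining term so trajectories cannot
# diffuse off to infinity. SGD gradient noise is emulated with additive
# Gaussian noise on the exact gradient.

rng = np.random.default_rng(0)

DEPTH = 0.3                   # depth of both wells (equal, so only width differs)
SHARP_W, FLAT_W = 0.2, 1.5    # widths: sharp (high curvature) vs flat (low curvature)
SHARP_X, FLAT_X = -1.0, 2.0   # well locations

def grad(x):
    """Gradient of loss(x) = -DEPTH*[exp(-(x-x1)^2/(2 w1^2)) + exp(-(x-x2)^2/(2 w2^2))]
    plus a weak quadratic confinement 0.01*(x - 0.5)^2."""
    g_sharp = DEPTH * (x - SHARP_X) / SHARP_W**2 * np.exp(-(x - SHARP_X)**2 / (2 * SHARP_W**2))
    g_flat = DEPTH * (x - FLAT_X) / FLAT_W**2 * np.exp(-(x - FLAT_X)**2 / (2 * FLAT_W**2))
    g_confine = 0.02 * (x - 0.5)
    return g_sharp + g_flat + g_confine

def run_sgd(noise_std, lr=0.05, steps=20_000, n_runs=200):
    """Start every run at the sharp minimum; return the fraction ending near the flat one."""
    x = np.full(n_runs, SHARP_X)
    for _ in range(steps):
        noisy_grad = grad(x) + noise_std * rng.standard_normal(n_runs)
        x -= lr * noisy_grad
    return np.mean(x > 0.5)  # fraction of trajectories that crossed into the flat basin

for sigma in (0.3, 2.0):
    frac = run_sgd(noise_std=sigma)
    print(f"noise std = {sigma:.1f}: {frac:.0%} of runs end in the flat valley")
```

With weak noise the trajectories stay frozen in the sharp valley where they started; with stronger noise they escape it and mostly settle in the flat valley, a toy analogue of the noise-driven preference for flat solutions described in the abstract.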
Similar Papers
Convergence, Sticking and Escape: Stochastic Dynamics Near Critical Points in SGD
Machine Learning (CS)
Helps computers find the best answer faster.
Phase diagram and eigenvalue dynamics of stochastic gradient descent in multilayer neural networks
Disordered Systems and Neural Networks
Helps computers learn better by finding the best settings.
A Bootstrap Perspective on Stochastic Gradient Descent
Machine Learning (CS)
Makes computer learning better by using random guesses.