Convergence, Sticking and Escape: Stochastic Dynamics Near Critical Points in SGD
By: Dmitry Dudukalov, Artem Logachov, Vladimir Lotov, and more
Potential Business Impact:
Helps computers find the best answer faster.
We study the convergence properties and escape dynamics of Stochastic Gradient Descent (SGD) in one-dimensional landscapes, treating infinite- and finite-variance noise separately. Our main focus is to identify the time scales on which SGD reliably moves from an initial point to the local minimum in the same "basin". Under suitable conditions on the noise distribution, we prove that SGD converges to the basin's minimum unless the initial point lies too close to a local maximum. In that near-maximum scenario, we show that SGD can linger in the neighborhood of the maximum for a long time. For initial points near a "sharp" maximum, we show that SGD does not remain stuck there, and we provide results for estimating the probability that it reaches each of the two neighboring minima. Overall, our findings present a nuanced view of SGD's transitions between local maxima and minima, influenced by both the noise characteristics and the underlying function geometry.
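As a rough illustration of the setting described above (not the paper's construction or results), the sketch below simulates one-dimensional SGD on a toy double-well landscape f(x) = (x^2 - 1)^2 / 4, which has minima at x = -1 and x = 1 separated by a local maximum at x = 0. It starts the iterates near the maximum and compares finite-variance (Gaussian) noise with infinite-variance (Cauchy) noise. The landscape, step size, noise laws, clipping of the iterates, and run lengths are all illustrative assumptions made here for the demo.

```python
import numpy as np

# Toy landscape (an assumption for this sketch, not the paper's setting):
# f(x) = (x^2 - 1)^2 / 4, with f'(x) = x^3 - x.
def grad(x):
    return x**3 - x

def run_sgd(x0, n_steps=2_000, lr=0.01, noise="gaussian", rng=None):
    """One SGD trajectory: x_{k+1} = x_k - lr * (f'(x_k) + xi_k)."""
    rng = rng or np.random.default_rng()
    x = x0
    for _ in range(n_steps):
        if noise == "gaussian":          # finite-variance noise
            xi = rng.normal(scale=1.0)
        else:                            # heavy-tailed, infinite-variance noise
            xi = rng.standard_cauchy()
        # Clipping keeps trajectories bounded after very large heavy-tailed
        # jumps; it is a numerical convenience for this demo only.
        x = np.clip(x - lr * (grad(x) + xi), -3.0, 3.0)
    return x

def basin_frequencies(x0, n_runs=500, **kwargs):
    """Estimate how often SGD started at x0 ends up in each neighboring basin."""
    rng = np.random.default_rng(0)
    finals = np.array([run_sgd(x0, rng=rng, **kwargs) for _ in range(n_runs)])
    return {"left minimum": float(np.mean(finals < 0)),
            "right minimum": float(np.mean(finals >= 0))}

# Start slightly to the right of the local maximum at x = 0.
print("gaussian:", basin_frequencies(0.05, noise="gaussian"))
print("cauchy:  ", basin_frequencies(0.05, noise="cauchy"))
```

In this toy setup one would expect the Gaussian runs, once settled, to rarely cross the barrier again, so the chosen basin mostly reflects the early noise, whereas the heavy-tailed runs keep jumping between basins; this is only meant to make the abstract's distinction between finite- and infinite-variance noise concrete, not to reproduce the paper's quantitative estimates.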
Similar Papers
Transient learning dynamics drive escape from sharp valleys in Stochastic Gradient Descent
Machine Learning (CS)
Makes AI learn better by finding smoother paths.
The global convergence time of stochastic gradient descent in non-convex landscapes: Sharp estimates via large deviations
Optimization and Control
Helps computers learn faster by finding best answers.
Quantitative Convergence Analysis of Projected Stochastic Gradient Descent for Non-Convex Losses via the Goldstein Subdifferential
Optimization and Control
Makes AI learn faster without needing extra tricks.