Enhancing Optimizer Stability: Momentum Adaptation of The NGN Step-size
By: Rustem Islamov, Niccolo Ajroldi, Antonio Orvieto, and others
Potential Business Impact:
Makes machine learning training work reliably, even with poorly tuned settings.
Modern optimization algorithms that incorporate momentum and adaptive step-sizes offer improved performance on numerous challenging deep learning tasks. However, their effectiveness is often highly sensitive to the choice of hyperparameters, especially the step-size. Tuning these parameters is difficult, resource-intensive, and time-consuming, so recent efforts have been directed toward enhancing the stability of optimizers across a wide range of hyperparameter choices [Schaipp et al., 2024]. In this paper, we introduce an algorithm that matches the performance of state-of-the-art optimizers while improving robustness to the choice of the step-size hyperparameter through a novel adaptation of the NGN step-size method [Orvieto and Xiao, 2024]. Specifically, we propose a momentum-based version (NGN-M) that attains the standard convergence rate of $\mathcal{O}(1/\sqrt{K})$ under less restrictive assumptions: unlike previous approaches, it requires neither the interpolation condition nor bounded stochastic gradients or iterates. Additionally, we empirically demonstrate that combining the NGN step-size with momentum yields enhanced robustness to the choice of the step-size hyperparameter while delivering performance comparable to or surpassing other state-of-the-art optimizers.
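To make the idea concrete, here is a minimal sketch of an NGN-style step-size combined with heavy-ball momentum on a toy least-squares problem. This is not the paper's NGN-M algorithm: the step-size form `gamma = c / (1 + c * ||g||^2 / (2 f(x)))` is assumed from the NGN method of Orvieto and Xiao (2024), and the momentum coupling, the constants `c` and `beta`, and the helper names are illustrative choices. Note how `gamma` shrinks automatically when the gradient is large relative to the loss, which is the source of the robustness to the choice of `c`.

```python
import numpy as np

def ngn_momentum_step(x, x_prev, grad_fn, loss_fn, c=0.5, beta=0.5):
    """One NGN-style step with heavy-ball momentum (illustrative sketch).

    Assumes the NGN step-size has the Polyak-like form
        gamma = c / (1 + c * ||g||^2 / (2 * f(x))),
    following Orvieto and Xiao (2024). The exact NGN-M update in the
    paper may combine momentum with this step-size differently.
    """
    g = grad_fn(x)
    f = loss_fn(x)
    # Adaptive step-size: bounded above by c, shrinks when the
    # gradient is large relative to the current loss value.
    gamma = c / (1.0 + c * np.dot(g, g) / (2.0 * f + 1e-12))
    # Gradient step plus heavy-ball momentum term.
    x_next = x - gamma * g + beta * (x - x_prev)
    return x_next, x

# Toy interpolation problem: f(x) = 0.5 * ||A x - b||^2 with b = A x_true,
# so the minimum loss is zero (chosen here only to keep the demo simple).
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5)) / np.sqrt(20.0)
x_true = rng.normal(size=5)
b = A @ x_true
loss = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

x = np.zeros(5)
x_prev = x.copy()
losses = [loss(x)]
for _ in range(300):
    x, x_prev = ngn_momentum_step(x, x_prev, grad, loss, c=0.5, beta=0.5)
    losses.append(loss(x))
```

In this sketch the loss decreases to numerical zero over the 300 iterations; the same loop with a poorly chosen constant step-size of 0.5 would be far less forgiving, since nothing would damp the update when the gradient is large.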
Similar Papers
Dynamically Weighted Momentum with Adaptive Step Sizes for Efficient Deep Network Training
Machine Learning (CS)
Helps computers learn faster and better.
High-dimensional limit theorems for SGD: Momentum and Adaptive Step-sizes
Machine Learning (Stat)
Improves computer learning by making it more stable.
Adaptive Stepsizing for Stochastic Gradient Langevin Dynamics in Bayesian Neural Networks
Machine Learning (CS)
Makes computer learning more accurate and stable.