Better LMO-based Momentum Methods with Second-Order Information
By: Sarit Khirirat, Abdurakhmon Sadiev, Yury Demidovich, and more
The use of momentum in stochastic optimization algorithms has shown empirical success across a range of machine learning tasks. Recently, a new class of stochastic momentum algorithms has emerged within the Linear Minimization Oracle (LMO) framework, leading to state-of-the-art methods, such as Muon, Scion, and Gluon, that effectively solve deep neural network training problems. However, traditional stochastic momentum methods offer convergence guarantees no better than the ${O}(1/K^{1/4})$ rate. While several approaches, such as Hessian-Corrected Momentum (HCM), have aimed to improve this rate, their theoretical results are generally restricted to the Euclidean norm setting. This limitation hinders their applicability to problems where non-Euclidean norms are required. In this paper, we extend the LMO-based framework by integrating HCM, and provide convergence guarantees under relaxed smoothness and arbitrary norm settings. We establish an improved convergence rate of ${O}(1/K^{1/3})$ for HCM, which can adapt to the geometry of the problem and achieve a faster rate than traditional momentum. Experimental results on training Multi-Layer Perceptrons (MLPs) and Long Short-Term Memory (LSTM) networks verify our theoretical observations.
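To make the ingredients of the abstract concrete, the following is a minimal sketch of one LMO-based step with a Hessian-corrected momentum term. It is an illustration, not the paper's exact algorithm: the Euclidean-ball LMO (a spectral-norm LMO would give a Muon-style update), the finite-difference Hessian-vector product, and all function names and parameters are assumptions made for the example. On a quadratic, the correction term $H_k(x_k - x_{k-1})$ makes the momentum track the current gradient exactly, which is the variance-reduction effect HCM exploits.

```python
import numpy as np

def grad(x, A, b):
    # Gradient of the test objective f(x) = 0.5 x^T A x - b^T x (illustrative).
    return A @ x - b

def lmo_l2(m, radius=1.0):
    # LMO for the Euclidean ball: argmin_{||d|| <= radius} <m, d> = -radius * m / ||m||.
    n = np.linalg.norm(m)
    return -radius * m / n if n > 0 else np.zeros_like(m)

def hcm_lmo_step(x, x_prev, m, A, b, a=0.1, step=0.1):
    # One HCM-style update (hypothetical sketch):
    #   m_{k+1} = (1 - a) * (m_k + H_k (x_k - x_{k-1})) + a * g_k
    # with the Hessian-vector product approximated by finite differences.
    g = grad(x, A, b)
    v = x - x_prev
    eps = 1e-6
    hvp = (grad(x + eps * v, A, b) - g) / eps   # approximates Hessian @ v
    m_new = (1 - a) * (m + hvp) + a * g          # Hessian-corrected momentum
    x_new = x + step * lmo_l2(m_new)             # LMO step instead of raw gradient step
    return x_new, x, m_new

# Toy strongly convex quadratic problem.
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
A = Q @ Q.T + np.eye(5)          # positive definite Hessian
b = rng.standard_normal(5)

x_prev = np.zeros(5)
x = np.zeros(5)
m = grad(x, A, b)                # initialize momentum at the first gradient
for k in range(200):
    x, x_prev, m = hcm_lmo_step(x, x_prev, m, A, b, step=1.0 / np.sqrt(k + 1))
```

Because the LMO always returns a unit-norm direction, the step size alone controls how far each update moves; the decaying schedule above drives the iterate into a shrinking neighborhood of the minimizer.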