Gradient Methods with Online Scaling Part II. Practical Aspects
By: Ya-Chi Chu, Wenzhi Gao, Yinyu Ye, and more
Potential Business Impact:
Makes computer learning faster and uses less memory.
Part I of this work [Gao25] establishes online scaled gradient methods (OSGM), a framework that utilizes online convex optimization to adapt stepsizes in gradient methods. This paper focuses on the practical aspects of OSGM. We leverage the OSGM framework to design new adaptive first-order methods and provide insights into their empirical behavior. The resulting method, OSGM-Best, matches the performance of quasi-Newton variants while requiring less memory and cheaper iterations. We also extend OSGM to nonconvex optimization and outline directions that connect OSGM to existing branches of optimization theory and practice.
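To make the core idea concrete, below is a minimal sketch of an online scaled gradient step under simplifying assumptions: the scaling is a per-coordinate vector (a diagonal matrix), and it is updated by online gradient descent on hypergradient feedback from the one-step progress f(x - p * g). The function names, the diagonal parameterization, the normalization by the squared gradient norm, and all constants are illustrative choices for this sketch, not the authors' OSGM-Best method.

```python
# Sketch of an OSGM-style loop: a scaled gradient step whose per-coordinate
# stepsizes p are themselves learned online. Hedged example, not the paper's code.
import numpy as np

def osgm_diag_sketch(f, grad, x0, iters=500, p0=1e-3, online_lr=1e-2):
    x = x0.astype(float).copy()
    p = np.full_like(x, p0)            # learned per-coordinate stepsizes (the "scaling")
    for _ in range(iters):
        g = grad(x)
        x_next = x - p * g             # scaled gradient step
        # Hypergradient of ell(p) = f(x - p * g) with respect to p,
        # normalized by ||g||^2 to keep the online update well scaled
        # (the normalization is a stabilization choice for this sketch).
        hypergrad = -grad(x_next) * g / (g @ g + 1e-12)
        p = p - online_lr * hypergrad  # online (gradient-descent) update of the scaling
        x = x_next
    return x, p

# Toy usage: an ill-conditioned quadratic f(x) = 0.5 x^T A x with minimizer 0.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_final, p_final = osgm_diag_sketch(f, grad, np.array([1.0, 1.0]))
print(x_final, f(x_final), p_final)   # p should approach roughly [1, 0.01], the inverse curvatures
```

In this toy run the learned stepsizes move toward the inverse curvatures of the quadratic, which is the kind of adaptation the OSGM framework formalizes with online convex optimization; the paper's methods use more refined feedback and scalings than this diagonal sketch.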
Similar Papers
Adaptive control mechanisms in gradient descent algorithms
Optimization and Control
Makes computer learning faster and more accurate.
Adaptive Conditional Gradient Descent
Optimization and Control
Makes computer learning faster and better.
Comparing BFGS and OGR for Second-Order Optimization
Machine Learning (CS)
Helps computers learn faster, even with tricky problems.