Comparing BFGS and OGR for Second-Order Optimization
By: Adrian Przybysz, Mikołaj Kołek, Franciszek Sobota, and more
Potential Business Impact:
Helps computers learn faster, even with tricky problems.
Estimating the Hessian matrix, especially for neural network training, is a challenging problem due to high dimensionality and cost. In this work, we compare the classical Sherman-Morrison update used in the popular BFGS method (Broyden-Fletcher-Goldfarb-Shanno), which maintains a positive definite Hessian approximation under a convexity assumption, with a novel approach called Online Gradient Regression (OGR). OGR performs regression of gradients against positions using an exponential moving average to estimate second derivatives online, without requiring Hessian inversion. Unlike BFGS, OGR allows estimation of a general (not necessarily positive definite) Hessian and can thus handle non-convex structures. We evaluate both methods across standard test functions and demonstrate that OGR achieves faster convergence and improved loss, particularly in non-convex settings.
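The abstract does not state OGR's update equations, so the snippet below is only a minimal sketch of the general idea: keep exponential moving averages of position and gradient statistics and fit the linear model g ≈ g0 + H(x − x0) to estimate the Hessian H. The class name, the `beta` decay rate, and the `ridge` regularizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class OnlineGradientRegression:
    """Sketch: estimate a Hessian online by regressing gradients
    against parameter positions, using exponential moving averages
    (EMAs) of the relevant first and second moments."""

    def __init__(self, dim, beta=0.9):
        self.beta = beta                       # EMA decay rate (assumed hyperparameter)
        self.mean_x = np.zeros(dim)            # EMA of positions
        self.mean_g = np.zeros(dim)            # EMA of gradients
        self.cov_xx = np.zeros((dim, dim))     # EMA of position-position second moment
        self.cov_xg = np.zeros((dim, dim))     # EMA of position-gradient second moment

    def update(self, x, g):
        """Accumulate one (parameters, gradient) observation."""
        b = self.beta
        self.mean_x = b * self.mean_x + (1 - b) * x
        self.mean_g = b * self.mean_g + (1 - b) * g
        dx = x - self.mean_x
        dg = g - self.mean_g
        self.cov_xx = b * self.cov_xx + (1 - b) * np.outer(dx, dx)
        self.cov_xg = b * self.cov_xg + (1 - b) * np.outer(dx, dg)

    def hessian_estimate(self, ridge=1e-6):
        # The model g ≈ g0 + H (x - x0) implies cov_xg ≈ cov_xx @ H.T,
        # so H.T ≈ cov_xx^{-1} cov_xg. A small ridge term keeps the solve
        # well-posed. Nothing forces the result to be positive definite,
        # so indefinite (non-convex) curvature can be represented.
        reg = self.cov_xx + ridge * np.eye(self.cov_xx.shape[0])
        h_t = np.linalg.solve(reg, self.cov_xg)
        return 0.5 * (h_t + h_t.T)             # symmetrize: a true Hessian is symmetric


if __name__ == "__main__":
    # Toy check on a quadratic f(x) = 0.5 x^T A x with an indefinite A,
    # i.e. a non-convex case a positive definite BFGS approximation cannot capture.
    A = np.array([[2.0, 0.5], [0.5, -1.0]])
    rng = np.random.default_rng(0)
    ogr = OnlineGradientRegression(dim=2, beta=0.9)
    for _ in range(500):
        x = rng.normal(size=2)                 # sampled points stand in for an optimizer trajectory
        g = A @ x                              # exact gradient of the quadratic
        ogr.update(x, g)
    print(ogr.hessian_estimate())              # roughly recovers A, including the negative eigenvalue
```

Note the contrast with BFGS: no inverse-Hessian update and no positive definiteness constraint is involved; the curvature estimate comes purely from the regression of observed gradients on observed positions.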
Similar Papers
Gradient Methods with Online Scaling Part II. Practical Aspects
Optimization and Control
Makes computer learning faster and use less memory.
An hybrid stochastic Newton algorithm for logistic regression
Computation
Teaches computers to learn from data faster.
ONG: Orthogonal Natural Gradient Descent
Machine Learning (CS)
Helps computers learn new things without forgetting old ones.