Nonlinear discretizations and Newton's method: characterizing stationary points of regression objectives
By: Conor Rowan
Potential Business Impact:
Shows when exact curvature (second-order) information helps or hurts neural network training.
Second-order methods are emerging as promising alternatives to standard first-order optimizers such as gradient descent and Adam for training neural networks. Though the advantages of including curvature information in computing optimization steps have been celebrated in the scientific machine learning literature, the only second-order methods that have been studied are quasi-Newton methods, in which the Hessian matrix of the objective function is approximated. Although one would expect only to gain from using the true Hessian in place of its approximation, we show that neural network training reliably fails when relying on exact curvature information. The failure modes provide insight into both the geometry of nonlinear discretizations and the distribution of stationary points in the loss landscape, leading us to question the conventional wisdom that the loss landscape is replete with local minima.
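To make the distinction concrete, the sketch below shows what a full-Newton step with the exact Hessian looks like on a toy regression objective. It is not the paper's code: the 1-4-1 tanh network, the sine-fitting data, and the small damping term added before the linear solve are illustrative assumptions, and JAX's `jax.hessian` is used only as a convenient way to form the exact Hessian.

```python
# Minimal sketch (illustrative, not the authors' implementation) of one
# full-Newton step with the exact Hessian on a tiny regression network.
import jax
import jax.numpy as jnp

def unpack(theta):
    # Assumed 1-4-1 tanh network: W1 (4,1), b1 (4,), W2 (1,4), b2 (1,)
    return (theta[0:4].reshape(4, 1), theta[4:8],
            theta[8:12].reshape(1, 4), theta[12:13])

def model(theta, x):
    W1, b1, W2, b2 = unpack(theta)
    h = jnp.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

def loss(theta, x, y):
    return jnp.mean((model(theta, x) - y) ** 2)

# Illustrative data: fit y = sin(pi * x) on [-1, 1]
x = jnp.linspace(-1.0, 1.0, 32).reshape(-1, 1)
y = jnp.sin(jnp.pi * x)
theta = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (13,))

g = jax.grad(loss)(theta, x, y)
H = jax.hessian(loss)(theta, x, y)   # exact (possibly indefinite) Hessian

# Newton step solves H p = -g; with an indefinite Hessian this step can be
# attracted to saddle points or other non-minimizing stationary points.
# The 1e-8 damping is only for numerical stability of the solve (assumption).
p = jnp.linalg.solve(H + 1e-8 * jnp.eye(theta.size), -g)
theta_new = theta + p

print("loss before:", loss(theta, x, y), "after:", loss(theta_new, x, y))
```

A quasi-Newton variant would replace `H` with a positive-definite approximation (e.g., a BFGS-style update), which is one way to read the paper's contrast between approximate and exact curvature.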