Optimizing Optimizers for Fast Gradient-Based Learning
By: Jaerin Lee, Kyoung Mu Lee
Potential Business Impact:
Speeds up machine-learning training by automatically designing and tuning the optimizers that drive gradient-based learning.
We lay the theoretical foundation for automating optimizer design in gradient-based learning. Based on the greedy principle, we formulate the design of optimizers as the problem of maximizing the instantaneous decrease in loss. Treating an optimizer as a function that translates loss gradient signals into parameter motions reduces this problem to a family of convex optimization problems over the space of optimizers. Solving these problems under various constraints not only recovers a wide range of popular optimizers as closed-form solutions, but also yields the optimal hyperparameters of these optimizers for the problems at hand. This enables a systematic approach to designing optimizers and tuning their hyperparameters according to the gradient statistics collected during the training process. Furthermore, this optimization of optimization can be performed dynamically during training.
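As a minimal illustrative sketch of the greedy formulation (the symbols and constraints below are assumptions for exposition, not taken from the paper): write g = \nabla_\theta \mathcal{L} for the current loss gradient and \Delta\theta for the update produced by the optimizer. To first order, the decrease in loss is -\langle g, \Delta\theta \rangle, so maximizing the instantaneous decrease under an assumed step-size budget \eta is the convex problem

\max_{\Delta\theta} \; -\langle g, \Delta\theta \rangle \quad \text{subject to} \quad \|\Delta\theta\|_2 \le \eta, \qquad \text{with closed-form solution} \qquad \Delta\theta^\star = -\eta \, \frac{g}{\|g\|_2},

i.e. normalized gradient descent. Swapping the constraint for \|\Delta\theta\|_\infty \le \eta instead gives the sign-based update \Delta\theta^\star = -\eta \, \operatorname{sign}(g). This illustrates how different constraints over the space of optimizers recover familiar update rules in closed form.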
Similar Papers
Gradient Descent with Provably Tuned Learning-rate Schedules
Machine Learning (CS)
Teaches computers to learn better, even when the problem is tricky.
A Trainable Optimizer
Machine Learning (CS)
Trains AI faster with self-learning steps.
Easing Optimization Paths: a Circuit Perspective
Machine Learning (CS)
Teaches computers to learn faster and more safely.