
Optimizing Optimizers for Fast Gradient-Based Learning

Published: December 6, 2025 | arXiv ID: 2512.06370v1

By: Jaerin Lee, Kyoung Mu Lee

Potential Business Impact:

Automates the design and tuning of the optimization algorithms used to train machine-learning models, which could reduce training time and hyperparameter-search cost.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We lay the theoretical foundation for automating optimizer design in gradient-based learning. Based on the greedy principle, we formulate optimizer design as maximizing the instantaneous decrease in loss. By treating an optimizer as a function that translates loss gradient signals into parameter motions, the problem reduces to a family of convex optimization problems over the space of optimizers. Solving these problems under various constraints not only recovers a wide range of popular optimizers as closed-form solutions, but also produces the optimal hyperparameters of these optimizers for the problems at hand. This enables a systematic approach to designing optimizers and tuning their hyperparameters according to the gradient statistics collected during training. Furthermore, this optimization of optimization can be performed dynamically during training.
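As a minimal sketch of the greedy principle described above (a standard illustration, not the paper's formulation): if the per-step update is constrained to a norm ball of radius eta, minimizing the linearized loss <grad, delta> has a closed-form solution that recovers familiar optimizers. The l2 ball yields normalized gradient descent, while the l-infinity ball yields sign descent. The function name greedy_update and the toy quadratic loss below are hypothetical, chosen only for illustration.

import numpy as np

def greedy_update(grad, radius, norm="l2"):
    # Closed-form minimizer of the linearized loss <grad, delta>
    # over the ball {delta : ||delta|| <= radius}.
    if norm == "l2":
        # Steepest descent: move opposite the gradient, scaled to the ball.
        return -radius * grad / (np.linalg.norm(grad) + 1e-12)
    if norm == "linf":
        # Sign descent: the linearized loss is minimized at a corner of the box.
        return -radius * np.sign(grad)
    raise ValueError(f"unknown norm: {norm}")

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta (hypothetical example).
A = np.diag([1.0, 10.0])
theta = np.array([3.0, 2.0])

for step in range(200):
    grad = A @ theta                      # exact gradient of the toy loss
    theta += greedy_update(grad, radius=0.05)

print(theta)  # theta ends up near the minimizer at the origin (within ~radius)

Swapping norm="l2" for norm="linf" changes the recovered optimizer without changing the greedy objective, which mirrors the abstract's point that different constraints yield different popular optimizers in closed form.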

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links

Page Count
49 pages

Category
Computer Science:
Machine Learning (CS)