
A Trainable Optimizer

Published: August 3, 2025 | arXiv ID: 2508.01764v1

By: Ruiqi Wang, Diego Klabjan

Potential Business Impact:

Trains AI models faster by letting the optimizer learn its own update steps

The concept of learning to optimize involves using a trainable optimization strategy rather than relying on manually designed full-gradient estimators such as ADAM. We present a framework that jointly trains the full-gradient estimator and the trainable weights of the model. Specifically, we prove that pseudo-linear TO (Trainable Optimizer), a linear approximation of the full gradient, matches SGD's convergence rate while effectively reducing variance. Pseudo-linear TO incurs negligible computational overhead, requiring only minimal additional tensor multiplications. To further improve computational efficiency, we introduce two simplified variants of pseudo-linear TO. Experiments demonstrate that TO methods converge faster than benchmark algorithms (e.g., ADAM) in both strongly convex and non-convex settings, as well as in fine-tuning an LLM.
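
The abstract does not spell out the update rule, but the core idea can be illustrated: the optimizer forms its full-gradient estimate as a learned linear combination of gradient information, and those mixing coefficients are trained alongside the model's weights. Below is a minimal PyTorch sketch under that assumption; the class name PseudoLinearTO, the coefficients alpha and beta, and the meta_step heuristic are illustrative stand-ins, not the authors' formulation.

import torch


class PseudoLinearTO:
    """Illustrative trainable optimizer: the descent direction is a learned
    linear combination of the current stochastic gradient and a running
    gradient average (a stand-in for a linear full-gradient approximation)."""

    def __init__(self, params, lr=1e-2, meta_lr=1e-3, momentum=0.9):
        self.params = [p for p in params if p.requires_grad]
        self.lr = lr
        self.meta_lr = meta_lr
        self.momentum = momentum
        # Trainable optimizer coefficients, shared across all parameters.
        self.alpha = torch.tensor(1.0)  # weight on the fresh stochastic gradient
        self.beta = torch.tensor(0.0)   # weight on the running gradient average
        self.avg = [torch.zeros_like(p) for p in self.params]

    @torch.no_grad()
    def step(self):
        for p, m in zip(self.params, self.avg):
            if p.grad is None:
                continue
            g = p.grad
            # Pseudo-linear full-gradient estimate: a linear map of g and the
            # running average, costing only a few extra tensor multiplications.
            d = self.alpha * g + self.beta * m
            p.add_(d, alpha=-self.lr)
            m.mul_(self.momentum).add_(g, alpha=1.0 - self.momentum)

    @torch.no_grad()
    def meta_step(self):
        # Crude joint training of the optimizer itself: increase beta when the
        # running average still points in the same direction as fresh gradients
        # (a heuristic surrogate for the paper's joint training objective).
        align = torch.tensor(0.0)
        for p, m in zip(self.params, self.avg):
            if p.grad is not None:
                align = align + (p.grad * m).sum()
        self.beta = (self.beta + self.meta_lr * torch.sign(align)).clamp(-1.0, 1.0)

    def zero_grad(self):
        for p in self.params:
            if p.grad is not None:
                p.grad.zero_()


# Toy usage: fit a linear model with the sketched optimizer.
model = torch.nn.Linear(10, 1)
opt = PseudoLinearTO(model.parameters(), lr=0.05)
for _ in range(200):
    x = torch.randn(32, 10)
    y = x.sum(dim=1, keepdim=True)
    loss = (model(x) - y).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    opt.meta_step()

The per-step extra work in this sketch is just two scalar-tensor multiplications and a running-average update, which is in line with the abstract's claim that the pseudo-linear approximation adds only negligible overhead.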

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)