A Trainable Optimizer
By: Ruiqi Wang, Diego Klabjan
Potential Business Impact:
Trains AI faster with self-learning steps
The concept of learning to optimize involves utilizing a trainable optimization strategy rather than relying on manually defined full gradient estimators such as ADAM. We present a framework that jointly trains the full gradient estimator and the trainable weights of the model. Specifically, we prove that pseudo-linear TO (Trainable Optimizer), a linear approximation of the full gradient, matches SGD's convergence rate while effectively reducing variance. Pseudo-linear TO incurs negligible computational overhead, requiring only a few additional tensor multiplications. To further improve computational efficiency, we introduce two simplified variants of pseudo-linear TO. Experiments demonstrate that TO methods converge faster than benchmark algorithms (e.g., ADAM) in both strongly convex and non-convex settings, as well as in fine-tuning an LLM.
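To make the joint-training idea concrete, below is a minimal, hypothetical PyTorch sketch of a pseudo-linear trainable optimizer on a toy least-squares problem: the update direction is a learned per-coordinate linear scaling of the stochastic gradient, and the scaling coefficients are trained alongside the model weights via a one-step meta-objective. The toy problem, the diagonal parameterization a, the learning rates, and the meta-objective are illustrative assumptions, not the paper's exact construction.

import torch

torch.manual_seed(0)

# Toy strongly convex problem: least squares on synthetic data (illustrative only).
X = torch.randn(256, 10)
w_true = torch.randn(10, 1)
y = X @ w_true + 0.1 * torch.randn(256, 1)

w = torch.zeros(10, 1, requires_grad=True)   # model weights
a = torch.ones(10, 1, requires_grad=True)    # learned per-coordinate scaling (assumed parameterization)
lr_w, lr_a = 0.05, 0.01

for step in range(300):
    idx = torch.randint(0, 256, (32,))                     # mini-batch for the stochastic gradient
    loss = 0.5 * ((X[idx] @ w - y[idx]) ** 2).mean()

    # Stochastic gradient of the model weights; keep the graph so 'a' can be trained through it.
    g, = torch.autograd.grad(loss, w, create_graph=True)

    # Pseudo-linear gradient estimate: a learned linear (here diagonal) map applied to g.
    g_hat = a * g
    w_next = w - lr_w * g_hat                              # candidate step, differentiable in 'a'

    # One-step meta-objective (assumed): loss after the candidate step on a fresh mini-batch.
    jdx = torch.randint(0, 256, (32,))
    meta_loss = 0.5 * ((X[jdx] @ w_next - y[jdx]) ** 2).mean()
    grad_a, = torch.autograd.grad(meta_loss, a)

    with torch.no_grad():
        a -= lr_a * grad_a                                 # train the optimizer's coefficients
        w.copy_(w_next.detach())                           # apply the step to the model weights

    if step % 50 == 0:
        print(f"step {step:3d}  minibatch loss {loss.item():.4f}")

In this sketch the only overhead of the learned estimator is the elementwise product a * g and one extra gradient computation for a, which mirrors the abstract's claim that the pseudo-linear form adds little beyond a few tensor multiplications.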
Similar Papers
Optimizing Optimizers for Fast Gradient-Based Learning
Machine Learning (CS)
Creates better computer learning by designing smarter math tools.
A Convexity-dependent Two-Phase Training Algorithm for Deep Neural Networks
Machine Learning (CS)
Makes computer learning faster and more accurate.