Alada: Alternating Adaptation of Momentum Method for Memory-Efficient Matrix Optimization
By: Xiaoyu He, Yu Cai, Jin Jia, and more
This work proposes Alada, an adaptive momentum method for stochastic optimization over large-scale matrices. Alada employs a rank-one factorization approach to estimate the second moment of gradients, where the two factors are updated alternately to minimize the estimation error. Alada achieves sublinear memory overhead and can be readily extended to optimizing tensor-shaped variables. We also equip Alada with a first moment estimation rule, which enhances the algorithm's robustness without incurring additional memory overhead. The theoretical performance of Alada aligns with that of traditional methods such as Adam. Numerical studies on several natural language processing tasks demonstrate Alada's reduced memory overhead and its robustness in training large models relative to Adam and its variants.
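To illustrate the general idea of a rank-one, alternately updated second-moment estimate, here is a minimal sketch in NumPy. It assumes the second moment of an m-by-n gradient is approximated as an outer product r c^T, with one factor refreshed per step by a closed-form least-squares fit to the squared gradient and mixed in with an EMA coefficient; the function name `alada_like_step`, the specific alternation schedule, and the EMA mixing are illustrative assumptions, not the paper's exact algorithm, and the paper's memory-free first-moment rule is not reproduced here.

```python
import numpy as np

def alada_like_step(W, grad, r, c, t, lr=1e-3, beta2=0.999, eps=1e-8):
    """One hypothetical Alada-style update (illustrative sketch only).

    The second moment of the gradient is approximated by a rank-one
    factorization V ~= r c^T (r: per-row factor, c: per-column factor),
    so only O(m + n) statistics are stored instead of the O(m n) matrix
    kept by Adam. The two factors are refreshed alternately via a
    closed-form least-squares fit to the squared gradient.
    """
    sq = grad ** 2  # elementwise squared gradient, the fitting target

    if t % 2 == 0:
        # Fix c, fit r:  argmin_r || sq - r c^T ||_F^2  =>  r = sq @ c / (c . c)
        r_new = sq @ c / (c @ c + eps)
        r = beta2 * r + (1.0 - beta2) * r_new
    else:
        # Fix r, fit c:  argmin_c || sq - r c^T ||_F^2  =>  c = sq.T @ r / (r . r)
        c_new = sq.T @ r / (r @ r + eps)
        c = beta2 * c + (1.0 - beta2) * c_new

    v_hat = np.outer(r, c)  # rank-one second-moment estimate, never materialized in a real low-memory implementation
    W = W - lr * grad / (np.sqrt(np.maximum(v_hat, 0.0)) + eps)
    return W, r, c

# Toy usage: a 4x3 parameter matrix with random stand-in gradients.
m, n = 4, 3
W = np.random.randn(m, n)
r, c = np.ones(m), np.ones(n)
for t in range(10):
    g = np.random.randn(m, n)
    W, r, c = alada_like_step(W, g, r, c, t)
```

The key memory saving is that only the vectors r and c (O(m + n) floats) are stored across iterations; the full rank-one estimate is only formed here for clarity and could be applied row- and column-wise instead.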
Similar Papers
AdaGrad Meets Muon: Adaptive Stepsizes for Orthogonal Updates
Machine Learning (CS)
Combines AdaGrad-style adaptive stepsizes with Muon's orthogonal updates.
AdaPM: a Partial Momentum Algorithm for LLM Training
Machine Learning (CS)
A partial momentum algorithm that reduces memory usage in LLM training.
Adaptive Memory Momentum via a Model-Based Framework for Deep Learning Optimization
Machine Learning (CS)
A model-based framework for adaptive memory momentum in deep learning optimization.