Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling
By: Teodora Srećković, Jonas Geiping, Antonio Orvieto
Potential Business Impact:
Could lower the cost of training language models by showing that, at small batch sizes, carefully tuned SGD with momentum can match Adam.
Adam is known to perform significantly better than Stochastic Gradient Descent (SGD) when training language models, a phenomenon for which a number of explanations have been proposed. In this work, we revisit this "optimizer gap" through a series of comprehensively tuned baseline training runs for language modeling with Transformers. We exhaustively study how momentum, gradient clipping, and batch size affect the gap between SGD and Adam. Our empirical findings show that SGD with momentum can actually perform similarly to Adam in small-batch settings, if tuned correctly. We revisit existing explanations for Adam's advantage, including heavy-tailed class imbalance, directional sharpness, and Hessian heterogeneity, which struggle to directly explain this phenomenon. Towards bridging this gap in our understanding, we analyze our Transformer training runs and simple quadratic settings inspired by the literature, and provide new insights, driven by stochastic differential equation models, into the role of batch size in the training dynamics.
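To make the comparison concrete, here is a minimal sketch of the kind of setup the abstract describes: SGD with momentum versus Adam at a small batch size, with gradient clipping. The toy model, random token data, and every hyperparameter value below are illustrative assumptions, not the paper's tuned configuration.

```python
# Minimal sketch (NOT the paper's setup): SGD with momentum vs. Adam at a small
# batch size, with gradient clipping. Toy model and random tokens only; all
# hyperparameter values are illustrative assumptions, not the paper's tuned ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, D_MODEL = 100, 32
BATCH_SIZE, SEQ_LEN = 8, 16   # "small-batch" regime (illustrative)


def make_model() -> nn.Module:
    # Stand-in for a Transformer language model: embedding + linear readout.
    return nn.Sequential(nn.Embedding(VOCAB, D_MODEL), nn.Linear(D_MODEL, VOCAB))


def train(optimizer_name: str, steps: int = 50, clip_norm: float = 1.0) -> float:
    model = make_model()
    if optimizer_name == "sgd":
        # SGD with momentum; the paper's claim is that, tuned well, this can
        # track Adam at small batch sizes.
        opt = torch.optim.SGD(model.parameters(), lr=0.2, momentum=0.9)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=3e-3, betas=(0.9, 0.95))
    loss_fn = nn.CrossEntropyLoss()
    loss = torch.tensor(float("nan"))
    for _ in range(steps):
        x = torch.randint(0, VOCAB, (BATCH_SIZE, SEQ_LEN))   # random token stream
        logits = model(x)                                    # (batch, seq, vocab)
        loss = loss_fn(logits[:, :-1].reshape(-1, VOCAB),    # next-token prediction
                       x[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)  # gradient clipping
        opt.step()
    return loss.item()


if __name__ == "__main__":
    print("SGD + momentum, final loss:", train("sgd"))
    print("Adam,           final loss:", train("adam"))
```

Because the tokens are random, neither run learns anything meaningful; the sketch only shows where the three knobs the paper studies (momentum, clipping threshold, batch size) enter the training loop. As a reference point for the batch-size angle the abstract mentions, in the standard SDE approximation of SGD the gradient-noise term scales roughly with the learning rate divided by the batch size, so shrinking the batch changes the stochastic dynamics both optimizers experience.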
Similar Papers
Understanding the Generalization of Stochastic Gradient Adam in Learning Neural Networks
Machine Learning (CS)
Studies how neural networks trained with stochastic Adam generalize.
Accelerating SGDM via Learning Rate and Batch Size Schedules: A Lyapunov-Based Analysis
Machine Learning (CS)
Speeds up SGD with momentum by scheduling the learning rate and batch size, backed by a Lyapunov-based analysis.
DIVEBATCH: Accelerating Model Training Through Gradient-Diversity Aware Batch Size Adaptation
Machine Learning (CS)
Speeds up model training by adapting the batch size based on gradient diversity.