Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling

Published: June 14, 2025 | arXiv ID: 2506.12543v1

By: Teodora Srećković, Jonas Geiping, Antonio Orvieto

Potential Business Impact:

Could make training language models faster and cheaper by clarifying when simpler optimizers like SGD can match Adam.

Business Areas:
A/B Testing, Data and Analytics

Adam is known to perform significantly better than Stochastic Gradient Descent (SGD) in language models, a phenomenon for which a number of explanations have been proposed. In this work, we revisit this "optimizer gap" through a series of comprehensively tuned baseline training runs for language modeling with Transformers. We exhaustively study how momentum, gradient clipping, and batch size affect the gap between SGD and Adam. Our empirical findings show that SGD with momentum can perform comparably to Adam in small-batch settings when tuned carefully. We revisit existing explanations for Adam's advantage, including heavy-tailed class imbalance, directional sharpness, and Hessian heterogeneity, and find that they struggle to directly explain this phenomenon. To help bridge this gap in our understanding, we analyze our Transformer training runs alongside simple quadratic settings inspired by the literature, and use stochastic differential equation models to provide new insights into the role of batch size in the training dynamics.
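
As a rough illustration of the batch-size effect the abstract alludes to, below is a minimal sketch (not the paper's code or experiments) of minibatch SGD with heavy-ball momentum on a simple quadratic loss. The curvature values, noise scale, learning rate, and batch sizes are all hypothetical; the point is only the standard SDE-style observation that the gradient-noise variance scales like 1/batch_size, so at a fixed learning rate the final loss floor rises as the batch shrinks.

```python
import numpy as np

# Illustrative sketch, assuming a toy quadratic loss f(x) = 0.5 * sum_i h_i * x_i^2
# and minibatch gradients equal to the true gradient plus zero-mean noise whose
# standard deviation shrinks as 1/sqrt(batch_size). In the SDE approximation,
# the stationary fluctuation of each coordinate scales roughly like lr / batch_size.

rng = np.random.default_rng(0)
h = np.array([1.0, 0.1])   # hypothetical curvature spectrum (mildly ill-conditioned)
sigma = 1.0                # assumed per-sample gradient noise scale

def run_sgd(lr, batch_size, momentum=0.9, steps=20_000):
    x = np.ones_like(h)
    v = np.zeros_like(h)
    for _ in range(steps):
        noise = sigma * rng.standard_normal(h.shape) / np.sqrt(batch_size)
        g = h * x + noise      # minibatch gradient of the quadratic
        v = momentum * v + g   # heavy-ball momentum buffer
        x = x - lr * v
    return 0.5 * np.sum(h * x**2)  # final loss, fluctuating around the noise floor

for B in (8, 64, 512):
    loss = np.mean([run_sgd(lr=0.05, batch_size=B) for _ in range(5)])
    print(f"batch {B:4d}: avg final loss ~ {loss:.4f}")
```

Running this prints a final loss that decreases as the batch size grows, which is the qualitative batch-size dependence the paper studies in far richer Transformer settings; it is not a reproduction of their results.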

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)