Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful
By: Martin Marek, Sanae Lotfi, Aditya Somasundaram, et al.
Potential Business Impact:
Enables stable, memory-efficient language model training and fine-tuning at very small batch sizes, avoiding the wasted compute of gradient accumulation and shrinking optimizer memory requirements.
Conventional wisdom dictates that small batch sizes make language model pretraining and fine-tuning unstable, motivating gradient accumulation, which trades off the number of optimizer steps for a proportional increase in batch size. While it is common to decrease the learning rate for smaller batch sizes, other hyperparameters are often held fixed. In this work, we revisit small batch sizes all the way down to batch size one, and we propose a rule for scaling Adam hyperparameters to small batch sizes. In particular, rather than holding the decay rate of the second moment fixed across batch sizes, we propose to hold its half-life fixed in terms of tokens. We find that small batch sizes (1) train stably, (2) are consistently more robust to hyperparameter choices, (3) achieve equal or better per-FLOP performance than larger batch sizes, and (4) notably enable stable language model training with vanilla SGD, even without momentum, despite storing no optimizer state. Building on these results, we provide practical recommendations for selecting a batch size and setting optimizer hyperparameters. We further recommend against gradient accumulation unless training on multiple devices with multiple model replicas. Finally, we show that a small batch size combined with an optimizer with a small state size can provide the performance benefits of full fine-tuning while maintaining a similar memory footprint to LoRA.
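To make the proposed scaling rule concrete, the sketch below shows one way the "fixed half-life in tokens" idea from the abstract could be turned into a batch-size-dependent Adam β₂. This is a minimal illustration, not the paper's reference implementation: the function names and the numeric values (a β₂ of 0.999 at batch size 512 with sequence length 1024, and the small batch of 4) are assumptions chosen for the example.

```python
import math

def beta2_from_token_half_life(tokens_per_step: float, half_life_tokens: float) -> float:
    """Solve beta2 ** half_life_steps = 1/2, with half_life_steps = half_life_tokens / tokens_per_step."""
    half_life_steps = half_life_tokens / tokens_per_step
    return 0.5 ** (1.0 / half_life_steps)

def token_half_life_from_beta2(beta2: float, tokens_per_step: float) -> float:
    """Inverse mapping: the half-life (in tokens) implied by a given beta2 at a given tokens-per-step."""
    half_life_steps = math.log(0.5) / math.log(beta2)
    return half_life_steps * tokens_per_step

# Illustrative reference configuration (assumed, not taken from the paper):
# beta2 = 0.999 at batch size 512 and sequence length 1024.
ref_tokens_per_step = 512 * 1024
half_life_tokens = token_half_life_from_beta2(0.999, ref_tokens_per_step)

# Rescale beta2 for a much smaller batch while keeping the token half-life fixed.
small_tokens_per_step = 4 * 1024  # batch size 4, same sequence length
beta2_small = beta2_from_token_half_life(small_tokens_per_step, half_life_tokens)
print(f"half-life ~ {half_life_tokens:.3g} tokens, rescaled beta2 ~ {beta2_small:.6f}")
```

Under these assumed numbers, the smaller batch processes far fewer tokens per optimizer step, so β₂ moves much closer to 1 to keep the second-moment estimate averaging over the same number of tokens, which is the intent of holding the half-life fixed in tokens rather than in steps.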
Similar Papers
Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling
Machine Learning (CS)
Examines how batch size affects the performance gap between Adam and SGD in language model training.
Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum
Machine Learning (CS)
Shows that increasing the batch size can improve the convergence of SGD with momentum.