How to Set the Batch Size for Large-Scale Pre-training?
By: Yunhua Zhou, Junhao Huang, Shuhao Xin, and more
Potential Business Impact:
Finds the best batch size for training AI models faster and with better results.
The concept of Critical Batch Size, as pioneered by OpenAI, has long served as a foundational principle for large-scale pre-training. However, with the paradigm shift towards the Warmup-Stable-Decay (WSD) learning rate scheduler, we observe that the original theoretical framework and its underlying mechanisms fail to align with the new pre-training dynamics. To bridge this gap between theory and practice, this paper derives a revised E(S) relationship tailored to the WSD scheduler, characterizing the trade-off between training data consumption E and steps S during pre-training. Our theoretical analysis reveals two fundamental properties of WSD-based pre-training: 1) B_min, the minimum batch size threshold required to achieve a target loss, and 2) B_opt, the optimal batch size that maximizes data efficiency by minimizing the total tokens consumed. Building upon these properties, we propose a dynamic Batch Size Scheduler. Extensive experiments demonstrate that our revised formula precisely captures the dynamics of large-scale pre-training, and the resulting scheduling strategy significantly enhances both training efficiency and final model quality.
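To make the two quantities the abstract defines more concrete, the Python sketch below sweeps candidate batch sizes over a placeholder tokens-to-target curve and reads off B_min and B_opt numerically. The functional form of tokens_to_target, the constants B_MIN_TRUE, B_CRIT, and E0, and the closing ramp schedule are hypothetical stand-ins for illustration only; they are not the revised E(S) relation or the Batch Size Scheduler derived in the paper.

```python
import numpy as np

# Hypothetical constants for the placeholder curve (not values from the paper).
B_MIN_TRUE = 5.0e5   # smallest viable batch size, in tokens per step
B_CRIT = 8.0e6       # scale beyond which larger batches start wasting tokens
E0 = 1.0e11          # token count in the idealized limit of the placeholder


def tokens_to_target(batch_size: float) -> float:
    """Placeholder E(B): total tokens needed to reach a target loss at a fixed
    batch size B. It diverges below B_min (target unreachable) and grows again
    at very large B (diminishing returns per step), giving an interior optimum."""
    if batch_size <= B_MIN_TRUE:
        return float("inf")
    return E0 * (batch_size / (batch_size - B_MIN_TRUE)) * (1.0 + batch_size / B_CRIT)


# Sweep a log-spaced grid of candidate batch sizes and read off both quantities.
batch_grid = np.logspace(5.0, 8.0, 2000)                  # 1e5 .. 1e8 tokens per step
tokens = np.array([tokens_to_target(b) for b in batch_grid])
feasible = np.isfinite(tokens)

b_min = batch_grid[feasible].min()                        # minimum batch size threshold
b_opt = batch_grid[feasible][tokens[feasible].argmin()]   # token-optimal batch size

print(f"B_min ~ {b_min:,.0f} tokens/step, B_opt ~ {b_opt:,.0f} tokens/step")


def batch_size_at_step(step: int, total_steps: int) -> int:
    """A generic linear ramp from B_min toward B_opt over training. This only
    sketches the idea of dynamic batch size scheduling; it is not the paper's
    scheduler."""
    frac = min(1.0, step / max(1, total_steps))
    return int(b_min + frac * (b_opt - b_min))
```

In practice the curve would come from fitting the paper's revised E(S) relation to actual pre-training runs, and the schedule would be driven by that fit rather than by a fixed linear ramp.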
Similar Papers
How to Set the Learning Rate for Large-Scale Pre-training?
Artificial Intelligence
Finds the best learning rate for training models, faster.
Critical Batch Size Revisited: A Simple Empirical Approach to Large-Batch Language Model Training
Machine Learning (CS)
Trains AI faster without losing quality.
Adaptive Batch Size and Learning Rate Scheduler for Stochastic Gradient Descent Based on Minimization of Stochastic First-order Oracle Complexity
Machine Learning (CS)
Makes training faster by adjusting batch size and learning rate on the fly.