EMA Without the Lag: Bias-Corrected Iterate Averaging Schemes
By: Adam Block, Cyril Zhang
Potential Business Impact:
Makes fine-tuning of AI language models faster and more stable.
Stochasticity in language model fine-tuning, often caused by the small batch sizes typically used in this regime, can destabilize training by introducing large oscillations in generation quality. A popular approach to mitigating this instability is to take an exponential moving average (EMA) of weights throughout training. While EMA reduces stochasticity and thereby smooths training, the bias introduced by old iterates often creates a lag in optimization relative to vanilla training. In this work, we propose the Bias-Corrected Exponential Moving Average (BEMA), a simple and practical augmentation of EMA that retains its variance-reduction benefits while eliminating the bias. BEMA is motivated by a simple theoretical model in which we demonstrate provable acceleration of BEMA over both standard EMA and vanilla training. Through an extensive suite of experiments on language models, we show that BEMA leads to significantly improved convergence rates and final performance over both EMA and vanilla training on a variety of standard LM benchmarks, making BEMA a practical and theoretically motivated intervention for more stable and efficient fine-tuning.
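To make the idea of bias-corrected weight averaging concrete, here is a minimal Python/PyTorch sketch. It maintains a standard EMA of model weights and applies the classic debiasing factor 1/(1 - beta^t) that removes the bias of a zero-initialized running average; this is an illustrative, hypothetical implementation, not the paper's exact BEMA scheme, which targets the lag from averaging stale iterates with its own correction. The class name `BiasCorrectedEMA` and the `beta` parameterization are assumptions for illustration.

```python
import torch


class BiasCorrectedEMA:
    """Illustrative bias-corrected EMA of model weights (hypothetical sketch).

    Maintains shadow weights ema_t = beta * ema_{t-1} + (1 - beta) * w_t,
    starting from zero, and returns the debiased estimate
    ema_t / (1 - beta**t). This is the classic correction for the
    initialization bias of a zero-started EMA, used here only to
    illustrate the general idea of bias-corrected iterate averaging.
    """

    def __init__(self, model: torch.nn.Module, beta: float = 0.999):
        self.beta = beta
        self.step = 0
        # Start the shadow at zero so the 1/(1 - beta^t) factor is exact.
        self.shadow = {
            name: torch.zeros_like(p, device="cpu")
            for name, p in model.state_dict().items()
            if p.dtype.is_floating_point
        }

    @torch.no_grad()
    def update(self, model: torch.nn.Module) -> None:
        """Fold the current weights into the running average."""
        self.step += 1
        for name, p in model.state_dict().items():
            if name in self.shadow:
                self.shadow[name].mul_(self.beta).add_(
                    p.detach().cpu(), alpha=1.0 - self.beta
                )

    def debiased_state_dict(self) -> dict:
        """Return averaged weights with the zero-init bias divided out."""
        if self.step == 0:
            raise RuntimeError("call update() at least once before reading")
        scale = 1.0 - self.beta ** self.step
        return {name: buf / scale for name, buf in self.shadow.items()}
```

A typical usage pattern would be to call `update(model)` after each optimizer step and, at evaluation time, load `debiased_state_dict()` into a copy of the model (with `strict=False`, since non-floating-point buffers are excluded from the average).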
Similar Papers
An Exponential Averaging Process with Strong Convergence Properties
Machine Learning (Stat)
Makes computer learning more accurate with noisy data.
In-Training Defenses against Emergent Misalignment in Language Models
Machine Learning (CS)
Stops AI from learning bad habits when retrained.
Input Adaptive Bayesian Model Averaging
Machine Learning (Stat)
Combines best guesses for smarter predictions.