Score: 1

EMA Without the Lag: Bias-Corrected Iterate Averaging Schemes

Published: July 31, 2025 | arXiv ID: 2508.00180v1

By: Adam Block, Cyril Zhang

Potential Business Impact:

Enables faster, more stable language-model fine-tuning, reducing the training time and compute needed to reach a given level of performance.

Stochasticity in language model fine-tuning, often caused by the small batch sizes typically used in this regime, can destabilize training by introducing large oscillations in generation quality. A popular approach to mitigating this instability is to take an exponential moving average (EMA) of weights throughout training. While EMA reduces stochasticity, thereby smoothing training, the bias introduced by old iterates often creates a lag in optimization relative to vanilla training. In this work, we propose the Bias-Corrected Exponential Moving Average (BEMA), a simple and practical augmentation of EMA that retains its variance-reduction benefits while eliminating the bias. BEMA is motivated by a simple theoretical model in which we demonstrate provable acceleration of BEMA over both standard EMA and vanilla training. Through an extensive suite of experiments on language models, we show that BEMA leads to significantly improved convergence rates and final performance over both EMA and vanilla training on a variety of standard LM benchmarks, making BEMA a practical and theoretically motivated intervention for more stable and efficient fine-tuning.
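To make the idea of a bias-corrected weight average concrete, here is a minimal sketch of an EMA over model parameters with a generic Adam-style debiasing step (dividing a zero-initialized average by 1 - beta^t). The class name `BiasCorrectedWeightEMA` and the update rule are illustrative assumptions for this sketch; the paper's actual BEMA correction is defined in the paper and may differ from this stand-in.

```python
# Sketch only: a weight EMA with Adam-style bias correction as a stand-in
# for the paper's BEMA update, which may use a different correction term.
import copy
import torch


class BiasCorrectedWeightEMA:
    """Tracks an exponential moving average of model parameters and
    returns a debiased copy of the model for evaluation."""

    def __init__(self, model: torch.nn.Module, beta: float = 0.999):
        self.beta = beta
        self.step = 0
        # Shadow parameters start at zero so the debiasing factor is exact.
        self.shadow = {
            name: torch.zeros_like(p)
            for name, p in model.named_parameters()
        }

    @torch.no_grad()
    def update(self, model: torch.nn.Module) -> None:
        # Called once per optimizer step with the current (noisy) weights.
        self.step += 1
        for name, p in model.named_parameters():
            self.shadow[name].mul_(self.beta).add_(p, alpha=1.0 - self.beta)

    @torch.no_grad()
    def averaged_model(self, model: torch.nn.Module) -> torch.nn.Module:
        # A zero-initialized EMA is biased toward zero by a factor
        # (1 - beta^t); dividing by that factor removes the init bias.
        assert self.step > 0, "call update() at least once before averaging"
        correction = 1.0 - self.beta ** self.step
        avg = copy.deepcopy(model)
        for name, p in avg.named_parameters():
            p.copy_(self.shadow[name] / correction)
        return avg
```

In a fine-tuning loop, one would call `update(model)` after each optimizer step and evaluate or checkpoint `averaged_model(model)` instead of the raw iterate, keeping the smoothing benefit of averaging while correcting for the bias of the averaging scheme.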

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
45 pages

Category
Computer Science:
Machine Learning (CS)