Output Embedding Centering for Stable LLM Pretraining
By: Felix Stollenwerk, Anna Lokrantz, Niclas Hertzberg
Potential Business Impact:
Makes training large AI language models more stable, so expensive training runs are less likely to fail partway through.
Pretraining of large language models is not only expensive but also prone to certain training instabilities. A specific instability that often occurs for large learning rates at the end of training is output logit divergence. The most widely used mitigation strategy, z-loss, merely addresses the symptoms rather than the underlying cause of the problem. In this paper, we analyze the instability from the perspective of the output embeddings' geometry and identify its cause. Based on this, we propose output embedding centering (OEC) as a new mitigation strategy, and prove that it suppresses output logit divergence. OEC can be implemented in two different ways, as a deterministic operation called μ-centering, or a regularization method called μ-loss. Our experiments show that both variants outperform z-loss in terms of training stability and learning rate sensitivity. In particular, they ensure that training converges even for large learning rates when z-loss fails. Furthermore, we find that μ-loss is significantly less sensitive to regularization hyperparameter tuning than z-loss.
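To make the two OEC variants concrete, here is a minimal PyTorch sketch based only on the description above. It assumes the output (unembedding) matrix W_out has shape [vocab_size, d_model], that "centering" means subtracting the mean output embedding taken over the vocabulary axis, and that μ-loss penalizes the squared norm of that mean; the coefficient names and exact formulations are assumptions, not taken from the paper.

```python
import torch

# Hypothetical sketch of the two OEC variants described in the abstract.
# W_out: output (unembedding) matrix of shape [vocab_size, d_model].
# Exact definitions and coefficients in the paper may differ.

@torch.no_grad()
def mu_centering(W_out: torch.Tensor) -> None:
    """Deterministic variant (assumed): subtract the mean output embedding
    so the embeddings are centered around the origin, e.g. after each
    optimizer step."""
    W_out -= W_out.mean(dim=0, keepdim=True)

def mu_loss(W_out: torch.Tensor, coeff: float = 1e-4) -> torch.Tensor:
    """Regularization variant (assumed): penalize the squared norm of the
    mean output embedding instead of centering it explicitly."""
    mu = W_out.mean(dim=0)  # mean over the vocabulary axis
    return coeff * mu.pow(2).sum()

def z_loss(logits: torch.Tensor, coeff: float = 1e-4) -> torch.Tensor:
    """Baseline for comparison: z-loss penalizes the squared log-partition
    function of the output logits."""
    log_z = torch.logsumexp(logits, dim=-1)
    return coeff * log_z.pow(2).mean()
```

In a training loop, the μ-loss term would simply be added to the cross-entropy loss (analogous to how z-loss is used), whereas μ-centering would be applied directly to the unembedding weights, for example after each optimizer step.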
Similar Papers
Logits Replay + MoClip: Stabilized, Low-Cost Post-Training with Minimal Forgetting
Machine Learning (CS)
Keeps AI smart while teaching it new skills.
Scaling Language-Centric Omnimodal Representation Learning
Computation and Language
Makes computers understand pictures and words better.
Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
Machine Learning (CS)
Makes AI learn better and faster from mistakes.