Output Embedding Centering for Stable LLM Pretraining

Published: January 5, 2026 | arXiv ID: 2601.02031v1

By: Felix Stollenwerk, Anna Lokrantz, Niclas Hertzberg

Potential Business Impact:

Makes pretraining of large language models more stable, so expensive training runs are less likely to fail or need restarting.

Business Areas:
MOOC Education, Software

Pretraining of large language models is not only expensive but also prone to certain training instabilities. A specific instability that often occurs for large learning rates at the end of training is output logit divergence. The most widely used mitigation strategy, z-loss, merely addresses the symptoms rather than the underlying cause of the problem. In this paper, we analyze the instability from the perspective of the output embeddings' geometry and identify its cause. Based on this, we propose output embedding centering (OEC) as a new mitigation strategy, and prove that it suppresses output logit divergence. OEC can be implemented in two ways: as a deterministic operation called μ-centering, or as a regularization method called μ-loss. Our experiments show that both variants outperform z-loss in terms of training stability and learning rate sensitivity. In particular, they ensure that training converges even for large learning rates when z-loss fails. Furthermore, we find that μ-loss is significantly less sensitive to regularization hyperparameter tuning than z-loss.
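To make the abstract's description concrete, below is a minimal, hypothetical PyTorch sketch of the two OEC variants. It assumes that μ-centering subtracts the vocabulary-wise mean of the output (unembedding) matrix before computing logits, and that μ-loss instead penalizes the squared norm of that mean embedding. The function names, the `coeff` hyperparameter, and the exact formulas are illustrative assumptions, not taken from the paper.

```python
import torch

# Assumption: the output (unembedding) matrix W has shape (vocab_size, d_model)
# and logits are computed as h @ W.T for hidden states h of shape (..., d_model).

def mu_centered_logits(h: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """mu-centering (assumed form): subtract the mean output embedding,
    averaged over the vocabulary, from every row of W before computing logits.
    This keeps the mean logit at zero, the geometric property the abstract
    associates with suppressing output logit divergence."""
    mu = W.mean(dim=0, keepdim=True)      # (1, d_model) mean output embedding
    return h @ (W - mu).T                 # (..., vocab_size) centered logits


def mu_loss(W: torch.Tensor, coeff: float = 1e-4) -> torch.Tensor:
    """mu-loss (assumed form): a regularizer that pushes the mean output
    embedding toward zero instead of centering it exactly.
    `coeff` is a hypothetical regularization weight."""
    mu = W.mean(dim=0)                    # (d_model,) mean output embedding
    return coeff * mu.pow(2).sum()


# Usage sketch inside a training step (illustrative only):
#   logits = mu_centered_logits(hidden_states, lm_head_weight)        # variant 1
#   loss = torch.nn.functional.cross_entropy(
#       logits.view(-1, logits.size(-1)), labels.view(-1))
#   loss = loss + mu_loss(lm_head_weight)                             # variant 2
```

In this sketch, the deterministic variant changes the forward pass itself, while the regularized variant leaves the forward pass untouched and adds a penalty term to the training loss, mirroring the z-loss workflow it is compared against in the paper.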

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)