Mid-Training of Large Language Models: A Survey

Published: October 8, 2025 | arXiv ID: 2510.06826v1

By: Kaixiang Mo, Yuxin Shi, Weiwei Weng, and more

Potential Business Impact:

Clarifies how an intermediate mid-training stage improves LLM data quality, optimization, and long-context capability, informing more efficient model development.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are typically developed through large-scale pre-training followed by task-specific fine-tuning. Recent advances highlight the importance of an intermediate mid-training stage, where models undergo multiple annealing-style phases that refine data quality, adapt optimization schedules, and extend context length. This stage mitigates diminishing returns from noisy tokens, stabilizes convergence, and expands model capability in late training. Its effectiveness can be explained through gradient noise scale, the information bottleneck, and curriculum learning, which together promote generalization and abstraction. Despite widespread use in state-of-the-art systems, there has been no prior survey of mid-training as a unified paradigm. We introduce the first taxonomy of LLM mid-training spanning data distribution, learning-rate scheduling, and long-context extension. We distill practical insights, compile evaluation benchmarks, and report gains to enable structured comparisons across models. We also identify open challenges and propose avenues for future research and practice.
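
The abstract's "annealing-style phases" and "learning-rate scheduling" refer to decaying the learning rate during a late training phase, often alongside a data-quality shift. As a minimal sketch only (not the paper's method), the Python function below implements a common warmup-stable-decay style schedule; the function name and all hyperparameter values (peak_lr, min_lr, warmup_frac, decay_frac) are illustrative assumptions.

```python
import math

def wsd_learning_rate(step, total_steps, peak_lr=3e-4, min_lr=3e-5,
                      warmup_frac=0.01, decay_frac=0.2):
    """Illustrative warmup-stable-decay (WSD) schedule: linear warmup,
    a long constant plateau, then a cosine anneal to min_lr over the
    final decay_frac of training (the annealing-style 'mid-training' phase)."""
    warmup_steps = int(total_steps * warmup_frac)
    decay_steps = int(total_steps * decay_frac)
    decay_start = total_steps - decay_steps

    if step < warmup_steps:
        # Linear warmup from 0 up to peak_lr.
        return peak_lr * step / max(1, warmup_steps)
    if step < decay_start:
        # Stable plateau at peak_lr for the bulk of pre-training.
        return peak_lr
    # Cosine anneal from peak_lr down to min_lr during the final phase.
    progress = (step - decay_start) / max(1, decay_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

if __name__ == "__main__":
    total = 100_000
    for s in (0, 500, 50_000, 85_000, 95_000, 100_000):
        print(f"step {s:>7}: lr = {wsd_learning_rate(s, total):.2e}")
```

In practice, the decay phase is often the point where higher-quality or domain-targeted data is mixed in, which is one of the data-distribution choices the survey's taxonomy covers.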

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
20 pages

Category
Computer Science:
Computation and Language