How Learning Rate Decay Wastes Your Best Data in Curriculum-Based LLM Pretraining

Published: November 24, 2025 | arXiv ID: 2511.18903v1

By: Kairong Luo, Zhenbo Sun, Haodong Wen, and more

Potential Business Impact:

Trains language models more effectively by sorting training data by quality and matching the learning-rate schedule to that ordering.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Due to the scarcity of high-quality data, large language models (LLMs) are often trained on mixtures of data with varying quality levels, even after sophisticated data curation. A natural approach to better leverage high-quality data is curriculum-based pretraining, where the model is trained on data sorted in ascending order of quality as determined by a quality metric. However, prior studies have reported limited improvements from such curriculum-based pretraining strategies. This work identifies a critical factor constraining these methods: the incompatibility between the ascending data quality order and the decaying learning rate (LR) schedule. We find that while curriculum-based training substantially outperforms random shuffling when using a constant LR, its advantage diminishes under standard LR decay schedules. Our experiments show this incompatibility can be mitigated by two simple strategies: (1) employing a more moderate LR decay schedule, where the final LR is only moderately smaller than the peak LR, and (2) replacing LR decay with model averaging, i.e., computing a weighted average of the final few checkpoints. By combining these strategies, we improve the average score on a suite of standard benchmarks by 1.64% over random shuffling, without additional data refinement. Validated on 1.5B-parameter models trained over 30B tokens with various data-quality metrics, our findings call for a re-evaluation of curriculum-based LLM pretraining and underscore the potential of co-designing data curricula with optimization methods.
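As a rough illustration of the two mitigation strategies described in the abstract, the sketch below shows a quality-ascending curriculum ordering, a learning-rate schedule that decays only moderately below the peak, and a weighted average of the final few checkpoints. This is not the authors' implementation; all function names, parameter values, and data structures are illustrative assumptions.

```python
# Hypothetical sketch of the strategies described above; names and
# default values (peak_lr, final_lr, weights) are illustrative only.

def curriculum_order(examples, quality_fn):
    """Strategy 0 (the curriculum itself): sort training examples in
    ascending order of a quality score, so the highest-quality data
    is seen last."""
    return sorted(examples, key=quality_fn)

def moderate_decay_lr(step, total_steps, peak_lr=3e-4, final_lr=1e-4):
    """Strategy 1: linearly decay from peak_lr to a final LR that is
    only moderately smaller than the peak, instead of decaying to ~0,
    so late (high-quality) data still gets a meaningful update size."""
    frac = min(step / max(total_steps, 1), 1.0)
    return peak_lr + frac * (final_lr - peak_lr)

def average_checkpoints(checkpoints, weights=None):
    """Strategy 2: replace LR decay with a weighted average of the
    final few checkpoints. Here `checkpoints` is a list of dicts
    mapping parameter names to lists of floats; real training code
    would operate on framework tensors instead."""
    if weights is None:
        weights = [1.0 / len(checkpoints)] * len(checkpoints)
    assert abs(sum(weights) - 1.0) < 1e-8, "weights must sum to 1"
    averaged = {}
    for name in checkpoints[0]:
        averaged[name] = [
            sum(w * ckpt[name][i] for w, ckpt in zip(weights, checkpoints))
            for i in range(len(checkpoints[0][name]))
        ]
    return averaged
```

For example, `average_checkpoints([ckpt_a, ckpt_b], weights=[0.3, 0.7])` would weight the later checkpoint more heavily; the paper's point is that such averaging can stand in for aggressive LR decay without shrinking the updates applied to the highest-quality data at the end of the curriculum.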

Country of Origin
🇨🇳 China

Page Count
28 pages

Category
Computer Science:
Machine Learning (CS)