Beyond Random Sampling: Efficient Language Model Pretraining via Curriculum Learning
By: Yang Zhang, Amr Mohamed, Hadi Abdine, and more
Potential Business Impact:
Teaches computers to learn faster and better.
Curriculum learning has shown promise in improving training efficiency and generalization in various machine learning domains, yet its potential in pretraining language models remains underexplored, prompting our work as the first systematic investigation in this area. We experimented with different settings, including vanilla curriculum learning, pacing-based sampling, and interleaved curricula, guided by six difficulty metrics spanning linguistic and information-theoretic perspectives. We trained models under these settings and evaluated their performance on eight diverse benchmarks. Our experiments reveal that curriculum learning consistently improves convergence in the early and mid-training phases and can yield lasting gains of up to 3.5% when used as a warmup strategy. Notably, we identify compression ratio, lexical diversity, and readability as effective difficulty signals across settings. Our findings highlight the importance of data ordering in large-scale pretraining and provide actionable insights for scalable, data-efficient model development under realistic training scenarios.
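The abstract does not include implementation details, but a minimal sketch can illustrate how the three highlighted difficulty signals (compression ratio, lexical diversity, readability) might be computed and used to order documents from easy to hard for a curriculum warmup. The function names, the specific formulas (zlib compression ratio, type-token ratio, mean sentence length as a readability proxy), the equal weighting, and the assumption of which end of each signal counts as "easy" are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch of curriculum ordering by simple difficulty signals.
# All scoring choices below are assumptions for illustration only.
import re
import zlib


def compression_ratio(text: str) -> float:
    """Compressed size / raw size; less compressible text is treated as harder (assumption)."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / max(len(raw), 1)


def lexical_diversity(text: str) -> float:
    """Type-token ratio; a richer vocabulary is treated as harder (assumption)."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / max(len(tokens), 1)


def readability_proxy(text: str) -> float:
    """Crude readability proxy: mean words per sentence (longer sentences = harder)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    return len(words) / max(len(sentences), 1)


def difficulty(text: str) -> float:
    """Combine the three signals into one score (equal weights, rough rescaling)."""
    return (
        compression_ratio(text)
        + lexical_diversity(text)
        + readability_proxy(text) / 40.0  # bring sentence length into a comparable range
    )


def curriculum_order(corpus: list[str]) -> list[str]:
    """Sort documents from easy to hard for a curriculum-style warmup phase."""
    return sorted(corpus, key=difficulty)


if __name__ == "__main__":
    corpus = [
        "The cat sat on the mat. The dog ran.",
        "Quantum chromodynamics describes the strong interaction between quarks and gluons.",
        "I like tea. Tea is warm. Tea is nice.",
    ]
    for doc in curriculum_order(corpus):
        print(f"{difficulty(doc):.3f}  {doc[:60]}")
```

A pacing-based variant, as mentioned in the abstract, would expose the model to a growing prefix of this ordered list over training steps rather than the fully sorted corpus at once; the sketch above only covers the scoring and ordering step.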
Similar Papers
Influence-driven Curriculum Learning for Pre-training on Limited Data
Computation and Language
Teaches computers to learn faster by sorting lessons.
Curriculum Learning for LLM Pretraining: An Analysis of Learning Dynamics
Machine Learning (CS)
Teaches computers better by changing learning order.
Scaling LLM Pre-training with Vocabulary Curriculum
Computation and Language
Lets computers learn new words like humans.