Curriculum Learning for LLM Pretraining: An Analysis of Learning Dynamics
By: Mohamed Elgaar, Hadi Amiri
Potential Business Impact:
Teaches computers better by changing the order of lessons.
Curriculum learning changes the order of pre-training data, but it remains unclear whether it changes the learning trajectory or mainly reorders exposure over a fixed trajectory. We train Pythia models (14M–410M parameters) for 300B tokens under three linguistically motivated curricula: Age-of-Acquisition (AoA), word frequency, and Verb Variation (VV), comparing each against Random ordering; at 1B parameters we compare Random and VV. Across orderings, training follows a shared sequence of latent phases, while curricula mainly change within-phase data exposure. In smaller models (up to 160M parameters), Random ordering exhibits higher gradient noise and stronger late-training output-head spectral saturation, alongside lower final accuracy; curricula reduce both effects at matched compute. At larger scales, saturation differences are smaller and curriculum gains shrink. We formalize the link between difficulty pacing and optimization stability in an idealized analysis based on gradient-variance control, and our results point to a practical takeaway: curricula help by stabilizing within-phase optimization rather than by creating new phases.
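The core mechanism described above, ordering training data by a per-example difficulty score and pacing it into phases, can be sketched in a few lines. This is an illustrative toy, not the authors' code: the difficulty scores, function names, and three-phase split are hypothetical stand-ins for scores such as mean word Age-of-Acquisition or negative log word frequency.

```python
# Minimal sketch of difficulty-paced curriculum ordering.
# All names and scores here are illustrative, not from the paper.

def curriculum_order(examples, difficulty, easiest_first=True):
    """Return examples sorted by a per-example difficulty score."""
    order = sorted(range(len(examples)),
                   key=lambda i: difficulty[i],
                   reverse=not easiest_first)
    return [examples[i] for i in order]

def paced_phases(examples, difficulty, num_phases=3):
    """Split a difficulty-sorted stream into equal-size phases, so later
    phases expose the model to progressively harder data."""
    ordered = curriculum_order(examples, difficulty)
    size = (len(ordered) + num_phases - 1) // num_phases  # ceil division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# Toy usage: six "sentences" with hypothetical AoA-style difficulty scores.
data = ["cat sat", "dog ran", "the economy grew",
        "quarks bind", "sun rose", "entropy rises"]
scores = [2.1, 2.3, 6.0, 9.5, 2.8, 8.7]
phases = paced_phases(data, scores, num_phases=3)
# Easiest examples land in the first phase, hardest in the last.
```

Under the paper's framing, such pacing would not create new latent training phases; it would only control which data the model sees within each phase, reducing gradient variance early in training.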
Similar Papers
Influence-driven Curriculum Learning for Pre-training on Limited Data
Computation and Language
Teaches computers to learn faster by sorting lessons.
What Makes a Good Curriculum? Disentangling the Effects of Data Ordering on LLM Mathematical Reasoning
Machine Learning (CS)
Teaches computers math better by sorting problems.
Beyond Random Sampling: Efficient Language Model Pretraining via Curriculum Learning
Computation and Language
Teaches computers to learn faster and better.