Revisiting the Scaling Properties of Downstream Metrics in Large Language Model Training
By: Jakub Krajewski, Amitis Shidani, Dan Busbridge, and others
Potential Business Impact:
Predicts how well a language model will perform on downstream benchmarks directly from its training budget, before committing to a full-scale training run.
While scaling laws for Large Language Models (LLMs) traditionally focus on proxy metrics like pretraining loss, predicting downstream task performance has been considered unreliable. This paper challenges that view by proposing a direct framework to model the scaling of benchmark performance from the training budget. We find that for a fixed token-to-parameter ratio, a simple power law can accurately describe the scaling behavior of log accuracy on multiple popular downstream tasks. Our results show that the direct approach extrapolates better than the previously proposed two-stage procedure, which is prone to compounding errors. Furthermore, we introduce functional forms that predict accuracy across token-to-parameter ratios and account for inference compute under repeated sampling. We validate our findings on models with up to 17B parameters trained on up to 350B tokens across two dataset mixtures. To support reproducibility and encourage future research, we release the complete set of pretraining losses and downstream evaluation results.
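The core claim is that, at a fixed token-to-parameter ratio, log accuracy on a downstream benchmark follows a simple power law in the training budget, and that fitting this relationship directly extrapolates better than the two-stage route (predict loss, then map loss to accuracy). The sketch below illustrates the direct-fitting idea only; the specific functional form, the parameter names, and the data points are illustrative assumptions, not the paper's exact parameterization or results.

```python
# Minimal sketch of direct scaling-law fitting for downstream accuracy,
# assuming log Acc(C) = -a * C**(-b) at a fixed token-to-parameter ratio.
# The form, initial guesses, and data are hypothetical, for illustration.
import numpy as np
from scipy.optimize import curve_fit

def log_accuracy(compute, a, b):
    """Assumed power law: log Acc(C) = -a * C**(-b), so Acc -> 1 as C grows."""
    return -a * np.power(compute, -b)

# Hypothetical observations: training compute (in units of 1e20 FLOPs)
# and measured benchmark accuracy for small-scale runs.
compute = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
accuracy = np.array([0.42, 0.48, 0.55, 0.61, 0.67])

# Fit the power law directly to log accuracy (the one-stage approach).
params, _ = curve_fit(log_accuracy, compute, np.log(accuracy), p0=[1.0, 0.2])
a_hat, b_hat = params

# Extrapolate to a larger budget without going through pretraining loss.
target_compute = 1000.0  # i.e., 1e23 FLOPs in these units
predicted_accuracy = np.exp(log_accuracy(target_compute, a_hat, b_hat))
print(f"Fitted a={a_hat:.3g}, b={b_hat:.3g}; "
      f"predicted accuracy at 1e23 FLOPs: {predicted_accuracy:.3f}")
```

In the two-stage alternative, one would first fit a loss-versus-compute law and then a loss-to-accuracy mapping; the paper argues the errors of those two fits compound, which is why the direct fit above extrapolates more reliably.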
Similar Papers
Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions
Computation and Language
Shows how design decisions beyond raw scale affect downstream performance.
Predicting Task Performance with Context-aware Scaling Laws
Computation and Language
Predicts task performance by incorporating context into scaling laws.
Relative Scaling Laws for LLMs
Computation and Language
Shows that scaling improves performance unevenly across settings.