Perplexity-Aware Data Scaling Law: Perplexity Landscapes Predict Performance for Continual Pre-training
By: Lei Liu, Hao Zhu, Yue Shen, and more
Potential Business Impact:
Finds the best data to teach computers faster.
Continual Pre-training (CPT) serves as a fundamental approach for adapting foundation models to domain-specific applications. Scaling laws for pre-training define a power-law relationship between dataset size and the test loss of an LLM. However, the marginal gains from simply increasing data for CPT diminish rapidly, yielding suboptimal data utilization and inefficient training. To address this challenge, we propose a novel perplexity-aware data scaling law that establishes a predictive relationship between the perplexity landscape of domain-specific data and the test loss. Our approach leverages the perplexity of the pre-trained model on domain data as a proxy for the knowledge gap, effectively quantifying the informational value of candidate training samples across the perplexity landscape. By fitting this scaling law across diverse perplexity regimes, we enable adaptive selection of high-utility data subsets, prioritizing content that maximizes knowledge absorption while minimizing redundancy and noise. Extensive experiments demonstrate that our method consistently identifies near-optimal training subsets and achieves superior performance on both medical and general-domain benchmarks.
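To make the selection idea concrete, the sketch below shows one plausible way to instantiate it in Python with the Hugging Face transformers library: score each candidate domain sample with the base model's perplexity, group samples into perplexity regimes, and fit a saturating power law L(D) = E + A * D^(-alpha) to observed test losses within each regime. The model name, bin edges, power-law form, and helper functions are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Illustrative sketch of perplexity-binned data selection for CPT.
# Assumptions: a HuggingFace causal LM as the base model, hand-picked
# perplexity bin edges, and the power-law form L(D) = E + A * D**(-alpha).
import torch
import numpy as np
from scipy.optimize import curve_fit
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; swap in the actual pre-trained foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of the base model on one candidate domain sample."""
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return float(torch.exp(loss))

def bin_by_perplexity(samples, edges=(0.0, 10.0, 30.0, 100.0, float("inf"))):
    """Group candidate samples into perplexity regimes (edges are illustrative)."""
    bins = {i: [] for i in range(len(edges) - 1)}
    for sample in samples:
        ppl = perplexity(sample)
        for i in range(len(edges) - 1):
            if edges[i] <= ppl < edges[i + 1]:
                bins[i].append((sample, ppl))
                break
    return bins

def fit_power_law(data_sizes, test_losses):
    """Fit L(D) = E + A * D**(-alpha) for one perplexity regime."""
    def law(D, E, A, alpha):
        return E + A * np.power(D, -alpha)
    params, _ = curve_fit(law, data_sizes, test_losses,
                          p0=(1.0, 1.0, 0.5), maxfev=10000)
    return params  # (E, A, alpha): regime-specific scaling-law coefficients
```

Under this sketch, a regime whose fitted curve predicts the largest loss reduction per additional token would be prioritized when assembling the CPT training subset.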
Similar Papers
PTPP-Aware Adaptation Scaling Laws: Predicting Domain-Adaptation Performance at Unseen Pre-Training Budgets
Machine Learning (CS)
Helps AI learn new things without forgetting old ones.
Learning Dynamics in Continual Pre-Training for Large Language Models
Computation and Language
Predicts how well AI learns new tasks.
The Data Efficiency Frontier of Financial Foundation Models: Scaling Laws from Continued Pretraining
Machine Learning (CS)
Teaches computers to understand money talk better.