How to inject knowledge efficiently? Knowledge Infusion Scaling Law for Pre-training Large Language Models
By: Kangtao Lv, Haibin Chen, Yujin Yuan, and more
Potential Business Impact:
Teaches AI new things without forgetting old ones.
Large language models (LLMs) have attracted significant attention due to their impressive general capabilities across diverse downstream tasks. However, without domain-specific optimization, they often underperform on specialized knowledge benchmarks and can even produce hallucinations. Recent studies show that strategically infusing domain knowledge during pretraining can substantially improve downstream performance. A critical challenge lies in balancing this infusion trade-off: injecting too little domain-specific data yields insufficient specialization, whereas excessive infusion triggers catastrophic forgetting of previously acquired knowledge. In this work, we focus on the phenomenon of memory collapse induced by over-infusion. Through systematic experiments, we make two key observations: (1) Critical collapse point: each model exhibits a threshold beyond which its knowledge retention capability sharply degrades. (2) Scale correlation: these collapse points scale consistently with model size. Building on these insights, we propose a knowledge infusion scaling law that predicts the optimal amount of domain knowledge to inject into large LLMs by analyzing their smaller counterparts. Extensive experiments across different model sizes and pretraining token budgets validate both the effectiveness and generalizability of our scaling law.
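To make the extrapolation idea concrete, here is a minimal sketch of how collapse points measured on small models could be fitted and extrapolated to a larger model. The numeric values and the power-law functional form are illustrative assumptions, not figures or formulas from the paper.

```python
import numpy as np

# Hypothetical measurements (illustrative values, not from the paper):
# for each small model, the domain-token count at which retention collapses.
model_sizes = np.array([1.3e8, 3.5e8, 7.6e8, 1.6e9])      # parameters N
collapse_tokens = np.array([2.1e8, 5.5e8, 1.2e9, 2.4e9])  # domain tokens D*

# Assume a power-law relation D*(N) = a * N^b and fit it in log space,
# where it is linear: log D* = log a + b * log N.
b, log_a = np.polyfit(np.log(model_sizes), np.log(collapse_tokens), 1)
a = np.exp(log_a)

# Extrapolate the safe infusion budget for a larger target model.
target_size = 7e9  # e.g., a 7B-parameter model
predicted_budget = a * target_size ** b
print(f"fitted exponent b = {b:.2f}")
print(f"predicted collapse point at {target_size:.0e} params: "
      f"{predicted_budget:.2e} domain tokens")
```

The fitted curve gives an estimate of how much domain data can be injected before memory collapse; staying below that predicted budget is the intended use of the scaling law.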
Similar Papers
Comparing Knowledge Injection Methods for LLMs in a Low-Resource Regime
Computation and Language
Teaches computers new facts without forgetting old ones.
Scaling Laws for Data-Efficient Visual Transfer Learning
Machine Learning (CS)
Teaches AI to learn better with less data.
Do Larger Language Models Imply Better Generalization? A Pretraining Scaling Law for Implicit Reasoning
Artificial Intelligence
Makes AI better at solving puzzles with lots of steps.