Next-token pretraining implies in-context learning
By: Paul M. Riechers, Henry R. Bigelow, Eric A. Alt, and more
Potential Business Impact:
AI models learn new tasks from examples given in their prompt, without task-specific training.
We argue that in-context learning (ICL) predictably arises from standard self-supervised next-token pretraining, rather than being an exotic emergent property. This work establishes the foundational principles of this emergence by focusing on in-distribution ICL, demonstrating how models necessarily adapt to context when trained on token sequences, especially from non-ergodic sources. Our information-theoretic framework precisely predicts these in-distribution ICL dynamics (i.e., context-dependent loss reduction). We verify this with experiments using synthetic datasets with differing types of correlational structure, reproducing characteristic phenomena such as phase transitions in training loss for induction head formation and power-law scaling of in-context loss. We further show that a model's in-context performance on any task is mathematically coupled to the ensemble of tasks seen in pretraining, offering a fundamental explanation, grounded in architecture- and modality-independent principles, for such inference-time learning.
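As a rough illustration of the context-dependent loss reduction the abstract describes (a minimal sketch, not the paper's actual framework, experiments, or parameters), the code below computes the per-position loss of an ideal Bayes-optimal next-token predictor on a toy non-ergodic source: each sequence is generated by one of two biased coins, and the predictor's loss falls as context accumulates and identifies the latent source. The source parameters, sequence lengths, and function names are all hypothetical choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-ergodic source: each sequence is drawn from ONE of two biased coins.
# (Parameters are illustrative assumptions, not taken from the paper.)
PS = np.array([0.2, 0.8])      # P(token = 1) under each latent source
PRIOR = np.array([0.5, 0.5])   # prior over which source generates a sequence

SEQ_LEN = 64
N_SEQ = 20_000


def sample_batch():
    """Sample binary sequences; each sequence commits to a single latent source."""
    sources = rng.choice(len(PS), size=N_SEQ, p=PRIOR)
    return (rng.random((N_SEQ, SEQ_LEN)) < PS[sources, None]).astype(int)


def bayes_optimal_nll(seqs):
    """Per-position negative log-likelihood (nats) of the Bayes-optimal predictor,
    which marginalizes over the latent source given the context seen so far."""
    nll = np.zeros(SEQ_LEN)
    log_post = np.tile(np.log(PRIOR), (len(seqs), 1))  # log P(source | context)
    for t in range(SEQ_LEN):
        # Predictive probability of the next token: sources weighted by posterior.
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        p1 = post @ PS
        x = seqs[:, t]
        p_x = np.where(x == 1, p1, 1.0 - p1)
        nll[t] = -np.mean(np.log(p_x))
        # Bayesian update of the posterior with the observed token.
        lik = np.where(x[:, None] == 1, PS, 1.0 - PS)
        log_post += np.log(lik)
    return nll


nll = bayes_optimal_nll(sample_batch())
print(f"loss at position 0:  {nll[0]:.3f} nats")   # ~ entropy of the mixture marginal
print(f"loss at position 63: {nll[-1]:.3f} nats")  # ~ entropy of a single source
```

Under these toy assumptions the loss starts near ln 2 ≈ 0.693 nats (the entropy of the mixed marginal) and decays toward ≈ 0.500 nats (the entropy of a single biased coin) as the context pins down which latent source is active, which is the qualitative shape of in-context loss reduction for a non-ergodic source.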
Similar Papers
Towards Auto-Regressive Next-Token Prediction: In-Context Learning Emerges from Generalization
Computation and Language
Explains how computers learn from examples.
When can in-context learning generalize out of task distribution?
Machine Learning (CS)
Teaches computers to learn new things from few examples.
Is In-Context Learning Learning?
Computation and Language
Computers learn new things from examples, not just memorizing.