Next-token pretraining implies in-context learning

Published: May 23, 2025 | arXiv ID: 2505.18373v2

By: Paul M. Riechers, Henry R. Bigelow, Eric A. Alt, and more

Potential Business Impact:

Models can learn new tasks from examples supplied at inference time, without task-specific training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We argue that in-context learning (ICL) predictably arises from standard self-supervised next-token pretraining, rather than being an exotic emergent property. This work establishes the foundational principles of this emergence by focusing on in-distribution ICL, demonstrating how models necessarily adapt to context when trained on token sequences, especially from non-ergodic sources. Our information-theoretic framework precisely predicts these in-distribution ICL dynamics (i.e., context-dependent loss reduction). We verify this with experiments using synthetic datasets with differing types of correlational structure, reproducing characteristic phenomena like phase transitions in training loss for induction head formation and power-law scaling of in-context loss. We further show that a model's in-context performance on any task is mathematically coupled to the ensemble of tasks seen in pretraining, offering a fundamental explanation, grounded in architecture- and modality-independent principles, for such inference-time learning.
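
The sketch below is a rough illustration of the abstract's central claim (context-dependent loss reduction when the pretraining source is non-ergodic), not the authors' experiments. It assumes a hypothetical toy source: a mixture of two Bernoulli token processes where each sequence is drawn entirely from one component. The per-position cross-entropy of the Bayes-optimal next-token predictor then falls as context accumulates, which is the in-distribution ICL dynamic described above.

```python
# Minimal sketch (assumed toy setup, not from the paper): context-dependent loss
# reduction on a non-ergodic source, i.e. a mixture of two Bernoulli processes
# where each sequence comes from a single component.
import numpy as np

rng = np.random.default_rng(0)

P_HEADS = np.array([0.2, 0.8])   # per-component probability of token "1"
PRIOR = np.array([0.5, 0.5])     # mixture weights over components
SEQ_LEN = 64
N_SEQS = 20_000


def sample_sequences(n, length):
    """Draw sequences from the non-ergodic mixture: one component per sequence."""
    comps = rng.choice(len(PRIOR), size=n, p=PRIOR)
    return rng.random((n, length)) < P_HEADS[comps][:, None]  # boolean tokens


def per_position_loss(seqs):
    """Cross-entropy (nats) of the Bayes-optimal next-token predictor at each position.

    The optimal predictor updates a posterior over components from the tokens seen
    so far, so its loss shrinks with context length: the loss reduction that the
    paper attributes to next-token pretraining on such sources.
    """
    n, length = seqs.shape
    log_post = np.tile(np.log(PRIOR), (n, 1))   # log posterior over components
    losses = np.zeros(length)
    for t in range(length):
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        p_one = post @ P_HEADS                  # predictive probability of token "1"
        tok = seqs[:, t]
        losses[t] = -np.mean(np.where(tok, np.log(p_one), np.log1p(-p_one)))
        # Bayesian update with the observed token.
        log_lik = np.where(tok[:, None], np.log(P_HEADS), np.log1p(-P_HEADS))
        log_post = log_post + log_lik
    return losses


if __name__ == "__main__":
    losses = per_position_loss(sample_sequences(N_SEQS, SEQ_LEN))
    for t in (0, 1, 3, 7, 15, 31, 63):
        print(f"position {t:2d}: loss {losses[t]:.4f} nats")
```

In this toy setup the position-0 loss is the marginal entropy (ln 2, about 0.69 nats) and decays toward the per-component entropy (about 0.50 nats) as the context pins down which component generated the sequence; a trained next-token predictor on the same source would show the same in-context loss curve.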

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)