On the Emergence of Induction Heads for In-Context Learning

Published: November 2, 2025 | arXiv ID: 2511.01033v1

By: Tiberiu Musat, Tiago Pimentel, Lorenzo Noci, and more

Potential Business Impact:

Explains how AI models learn new associations directly from examples in their input, without retraining.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformers have become the dominant architecture for natural language processing. Part of their success is owed to a remarkable capability known as in-context learning (ICL): they can acquire and apply novel associations solely from their input context, without any updates to their weights. In this work, we study the emergence of induction heads, a previously identified mechanism in two-layer transformers that is particularly important for in-context learning. We uncover a relatively simple and interpretable structure of the weight matrices implementing the induction head. We theoretically explain the origin of this structure using a minimal ICL task formulation and a modified transformer architecture. We give a formal proof that the training dynamics remain constrained to a 19-dimensional subspace of the parameter space. Empirically, we validate this constraint while observing that only 3 dimensions account for the emergence of an induction head. By further studying the training dynamics inside this 3-dimensional subspace, we find that the time until the emergence of an induction head follows a tight asymptotic bound that is quadratic in the input context length.
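To make the induction-head mechanism concrete, here is a minimal sketch in Python/numpy of the canonical two-step circuit on a toy key-value retrieval task. This is an illustrative stand-in, not the paper's exact minimal ICL formulation or modified architecture: the task layout, vocabulary split, and hard-attention shortcut are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy key-value retrieval task (illustrative, not the paper's exact setup):
# the context lists (key, value) pairs, then repeats one key; the correct
# output is that key's value.
num_pairs = 5
keys = rng.choice(8, size=num_pairs, replace=False)    # keys drawn from tokens 0..7
values = rng.choice(np.arange(8, 16), size=num_pairs)  # values drawn from tokens 8..15
query = rng.integers(num_pairs)

tokens = np.empty(2 * num_pairs + 1, dtype=int)
tokens[0:-1:2] = keys          # k1 v1 k2 v2 ... kN vN q
tokens[1:-1:2] = values
tokens[-1] = keys[query]
target = values[query]

vocab_size = 16
one_hot = np.eye(vocab_size)[tokens]                   # (T, vocab)

# Layer-1 "previous-token" head: each position stores the identity of the
# token immediately preceding it.
prev_token = np.zeros_like(one_hot)
prev_token[1:] = one_hot[:-1]

# Layer-2 "induction" head: the final (query) position attends to earlier
# positions whose previous token matches the query key, then copies the
# token found there, i.e. the value that followed that key.
scores = one_hot[-1] @ prev_token[:-1].T               # (T-1,) match scores
scores = np.where(scores > 0, 0.0, -np.inf)            # hard attention for clarity
attn = np.exp(scores) / np.exp(scores).sum()
prediction = int(np.argmax(attn @ one_hot[:-1]))

print(f"query key={tokens[-1]}, predicted value={prediction}, target={target}")
assert prediction == target
```

In the paper's setting, this two-step circuit is not hand-built as above but emerges during training of a two-layer transformer; the abstract's results concern how and when that emergence happens.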

Country of Origin
🇨🇭 Switzerland

Page Count
37 pages

Category
Computer Science:
Artificial Intelligence