Emergence of Minimal Circuits for Indirect Object Identification in Attention-Only Transformers

Published: October 28, 2025 | arXiv ID: 2510.25013v1

By: Rabin Adhikari

Potential Business Impact:

Identifies simple, interpretable "thinking paths" (circuits) inside AI models, making their reasoning easier to inspect and audit.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Mechanistic interpretability aims to reverse-engineer large language models (LLMs) into human-understandable computational circuits. However, the complexity of pretrained models often obscures the minimal mechanisms required for specific reasoning tasks. In this work, we train small, attention-only transformers from scratch on a symbolic version of the Indirect Object Identification (IOI) task -- a benchmark for studying coreference-like reasoning in transformers. Surprisingly, a single-layer model with only two attention heads achieves perfect IOI accuracy, despite lacking MLPs and normalization layers. Through residual stream decomposition, spectral analysis, and embedding interventions, we find that the two heads specialize into additive and contrastive subcircuits that jointly implement IOI resolution. Furthermore, we show that a two-layer, one-head model achieves similar performance by composing information across layers through query-value interactions. These results demonstrate that task-specific training induces highly interpretable, minimal circuits, offering a controlled testbed for probing the computational foundations of transformer reasoning.
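To make the described architecture concrete, here is a minimal PyTorch sketch of a one-layer, attention-only transformer of the kind the abstract describes: token and positional embeddings feed a two-head self-attention block whose output is added back to the residual stream, with no MLPs and no normalization layers. The class name, dimensions, and vocabulary size are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AttnOnlyTransformer(nn.Module):
    """Minimal sketch: one-layer, attention-only transformer.

    Token + positional embeddings form the residual stream; a single
    multi-head self-attention block writes back into it; a linear
    unembedding produces logits. No MLPs, no LayerNorm, mirroring the
    stripped-down setup described in the abstract (hyperparameters here
    are assumptions for illustration).
    """

    def __init__(self, vocab_size: int, d_model: int = 64,
                 n_heads: int = 2, max_len: int = 16):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.unembed = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) of symbolic token ids
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)  # residual stream

        # Causal mask: True entries are disallowed, so each position
        # attends only to itself and earlier tokens.
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool,
                       device=tokens.device),
            diagonal=1,
        )
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        x = x + attn_out               # additive write-back to the stream
        return self.unembed(x)         # per-position next-token logits
```

Under this setup, interpretability analyses like those in the paper become tractable: with only two heads and a purely additive residual stream, each head's contribution to the output logits can be read off directly from its write into the stream.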

Country of Origin
🇩🇪 Germany

Page Count
9 pages

Category
Computer Science:
Computation and Language