Emergence of Minimal Circuits for Indirect Object Identification in Attention-Only Transformers
By: Rabin Adhikari
Potential Business Impact:
Finds simple "thinking paths" inside AI.
Mechanistic interpretability aims to reverse-engineer large language models (LLMs) into human-understandable computational circuits. However, the complexity of pretrained models often obscures the minimal mechanisms required for specific reasoning tasks. In this work, we train small, attention-only transformers from scratch on a symbolic version of the Indirect Object Identification (IOI) task -- a benchmark for studying coreference-like reasoning in transformers. Surprisingly, a single-layer model with only two attention heads achieves perfect IOI accuracy, despite lacking MLPs and normalization layers. Through residual stream decomposition, spectral analysis, and embedding interventions, we find that the two heads specialize into additive and contrastive subcircuits that jointly implement IOI resolution. Furthermore, we show that a two-layer, one-head model achieves similar performance by composing information across layers through query-value interactions. These results demonstrate that task-specific training induces highly interpretable, minimal circuits, offering a controlled testbed for probing the computational foundations of transformer reasoning.
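To make the architecture concrete, below is a minimal sketch (not the authors' code) of the kind of attention-only transformer the abstract describes: token and positional embeddings feeding one or more attention layers that write additively into the residual stream, with no MLP blocks and no normalization, followed by an unembedding. All names, dimensions, and the symbolic vocabulary size are illustrative assumptions.

# Hedged sketch: attention-only transformer (no MLPs, no LayerNorm),
# configurable as 1 layer / 2 heads or 2 layers / 1 head as in the paper.
import torch
import torch.nn as nn

class AttnOnlyTransformer(nn.Module):
    def __init__(self, vocab_size=32, d_model=64, n_heads=2, n_layers=1, max_len=16):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # Attention layers add their output to the residual stream; nothing else.
        self.attn_layers = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.unembed = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids from the symbolic IOI vocabulary
        seq_len = tokens.shape[1]
        pos = torch.arange(seq_len, device=tokens.device)
        resid = self.tok_emb(tokens) + self.pos_emb(pos)  # residual stream
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=tokens.device),
            diagonal=1,
        )
        for attn in self.attn_layers:
            attn_out, _ = attn(resid, resid, resid, attn_mask=causal_mask)
            resid = resid + attn_out  # attention output writes additively
        return self.unembed(resid)  # logits over the symbolic vocabulary

# Usage: read the prediction at the final position of a symbolic IOI prompt.
model = AttnOnlyTransformer(n_heads=2, n_layers=1)
prompt = torch.randint(0, 32, (1, 10))   # placeholder symbolic token ids
logits = model(prompt)[:, -1, :]
predicted_token = logits.argmax(dim=-1)

Because every head's contribution enters the residual stream purely additively, a model like this can be decomposed head-by-head, which is what makes the residual stream decomposition and intervention analyses described above tractable.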
Similar Papers
From Indirect Object Identification to Syllogisms: Exploring Binary Mechanisms in Transformer Circuits
Computation and Language
Shows how computers understand and reason with logic.
Beyond Components: Singular Vector-Based Interpretability of Transformer Circuits
Machine Learning (CS)
Finds hidden, separate jobs inside AI's brain.
Mechanistic Interpretability of Fine-Tuned Vision Transformers on Distorted Images: Decoding Attention Head Behavior for Transparent and Trustworthy AI
Machine Learning (CS)
Helps AI understand what's important in pictures.