Beyond Components: Singular Vector-Based Interpretability of Transformer Circuits

Published: November 25, 2025 | arXiv ID: 2511.20273v1

By: Areeb Ahmad, Abhinav Joshi, Ashutosh Modi

Potential Business Impact:

Reveals hidden, independent functions inside the components of an AI model.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformer-based language models exhibit complex and distributed behavior, yet their internal computations remain poorly understood. Existing mechanistic interpretability methods typically treat attention heads and multilayer perceptron (MLP) layers (the building blocks of the transformer architecture) as indivisible units, overlooking the possibility of functional substructure learned within them. In this work, we introduce a more fine-grained perspective that decomposes these components into orthogonal singular directions, revealing superposed and independent computations within a single head or MLP. We validate this perspective on widely used standard tasks, including Indirect Object Identification (IOI), Gender Pronoun (GP), and Greater Than (GT), showing that previously identified canonical functional heads, such as the name mover, encode multiple overlapping subfunctions aligned with distinct singular directions. Nodes in the computational graph previously identified as circuit elements show strong activation along specific low-rank directions, suggesting that meaningful computations reside in compact subspaces. While some directions remain challenging to interpret fully, our results highlight that transformer computations are more distributed, structured, and compositional than previously assumed. This perspective opens new avenues for fine-grained mechanistic interpretability and a deeper understanding of model internals.

Country of Origin
🇮🇳 India

Repos / Data Links

Page Count
33 pages

Category
Computer Science:
Machine Learning (CS)