Decomposing Representation Space into Interpretable Subspaces with Unsupervised Learning
By: Xinting Huang, Michael Hahn
Potential Business Impact:
Finds hidden "folders" inside AI brains.
Understanding the internal representations of neural models is a core goal of mechanistic interpretability. Because of its high dimensionality, the representation space can encode many different aspects of the input. To what extent are these aspects organized and encoded in separate subspaces? And is it possible to find such "natural" subspaces in a purely unsupervised way? Somewhat surprisingly, we can indeed achieve this and find interpretable subspaces with a seemingly unrelated training objective. Our method, neighbor distance minimization (NDM), learns non-basis-aligned subspaces in an unsupervised manner. Qualitative analysis shows that the subspaces are interpretable in many cases, and the information encoded in a given subspace tends to correspond to the same abstract concept across different inputs, making such subspaces similar to "variables" used by the model. We also conduct quantitative experiments using known circuits in GPT-2; the results show a strong connection between subspaces and circuit variables. We further provide evidence of scalability to 2B-parameter models by finding separate subspaces that mediate the routing between contextual and parametric knowledge. Viewed more broadly, our findings offer a new perspective on understanding model internals and building circuits.
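The abstract names the training objective (neighbor distance minimization) but not its exact form. Below is a minimal, hypothetical sketch of one way such an objective could look: learn an orthogonal change of basis, split the rotated coordinates into fixed-size subspaces, and minimize each activation's distance to its nearest neighbor measured inside each subspace. The function names, the equal subspace sizes, and the specific loss form are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical NDM-style objective (assumptions, not the paper's exact method):
# an orthogonal rotation is learned, coordinates are split into equal-size
# subspaces, and the loss is the mean nearest-neighbor distance within each one.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

d_model, n_subspaces, subspace_dim = 64, 8, 8   # toy sizes; real models are larger

# Learnable orthogonal matrix: its rows define a non-basis-aligned coordinate system.
rotation = orthogonal(nn.Linear(d_model, d_model, bias=False))
opt = torch.optim.Adam(rotation.parameters(), lr=1e-3)

def ndm_style_loss(acts: torch.Tensor) -> torch.Tensor:
    """acts: (batch, d_model) hidden activations taken from some model layer."""
    z = rotation(acts)                                   # rotate into the learned basis
    z = z.view(z.shape[0], n_subspaces, subspace_dim)    # (batch, k, d_k)
    eye = torch.eye(z.shape[0], dtype=torch.bool)
    loss = 0.0
    for k in range(n_subspaces):
        zk = z[:, k, :]                                  # projections into subspace k
        dist = torch.cdist(zk, zk)                       # pairwise distances in that subspace
        dist = dist.masked_fill(eye, float("inf"))       # exclude self-matches
        loss = loss + dist.min(dim=1).values.mean()      # nearest-neighbor distance
    return loss / n_subspaces

# Toy training loop on random "activations"; in practice these would come from
# a transformer's residual stream.
for _ in range(100):
    batch = torch.randn(256, d_model)
    opt.zero_grad()
    ndm_style_loss(batch).backward()
    opt.step()
```

Because the rotation is orthogonal, total pairwise distances are preserved; the objective can only redistribute structure across subspaces, which is what makes the resulting subspaces candidates for interpretable "variables" rather than a degenerate collapse.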
Similar Papers
Native Logical and Hierarchical Representations with Subspace Embeddings
Machine Learning (CS)
Computers understand words and their meanings better.
The Universal Weight Subspace Hypothesis
Machine Learning (CS)
Finds hidden patterns in AI brains.
Discovering Semantic Subdimensions through Disentangled Conceptual Representations
Computation and Language
Finds hidden meanings inside words.