Attention Layers Add Into Low-Dimensional Residual Subspaces
By: Junxuan Wang, Xuyang Ge, Wentao Shu, and more
Potential Business Impact:
Helps researchers see inside AI models by fixing the "dead features" in their interpretability tools.
While transformer models are widely believed to operate in high-dimensional hidden spaces, we show that attention outputs are confined to a surprisingly low-dimensional subspace, where about 60% of the directions account for 99% of the variance. This phenomenon is induced by the attention output projection matrix and is consistently observed across diverse model families and datasets. Critically, we identify this low-rank structure as a fundamental cause of the prevalent dead feature problem in sparse dictionary learning, where it creates a mismatch between randomly initialized features and the intrinsic geometry of the activation space. Building on this insight, we propose a subspace-constrained training method for sparse autoencoders (SAEs), initializing feature directions within the active subspace of activations. Our approach reduces dead features from 87% to below 1% in Attention Output SAEs with 1M features, and extends to other sparse dictionary learning methods. Our findings provide both new insights into the geometry of attention and practical tools for improving sparse dictionary learning in large language models.
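To make the idea concrete, here is a minimal sketch of the two steps the abstract describes: estimating the "active subspace" of attention outputs (the directions that explain 99% of the variance) and initializing SAE decoder directions inside it. This is not the paper's implementation; the function names (`active_subspace`, `init_sae_decoder_in_subspace`), the PCA-via-SVD estimate, and the 99% threshold default are illustrative assumptions, and the activations are assumed to be collected as an `[n_samples, d_model]` tensor.

```python
import torch


def active_subspace(acts: torch.Tensor, var_threshold: float = 0.99) -> torch.Tensor:
    """Estimate an orthonormal basis for the active subspace of attention outputs.

    acts: [n_samples, d_model] attention-output activations sampled from a model.
    Returns a [d_model, k] basis spanning the directions that together explain
    `var_threshold` of the variance (PCA via SVD of the centered activations).
    """
    acts = acts - acts.mean(dim=0, keepdim=True)
    _, S, Vh = torch.linalg.svd(acts, full_matrices=False)
    var = S ** 2
    cum_ratio = torch.cumsum(var, dim=0) / var.sum()
    # Smallest k such that the top-k directions reach the variance threshold.
    k = int(torch.searchsorted(cum_ratio, torch.tensor(var_threshold)).item()) + 1
    return Vh[:k].T  # [d_model, k]


def init_sae_decoder_in_subspace(n_features: int, basis: torch.Tensor) -> torch.Tensor:
    """Initialize SAE decoder directions constrained to the active subspace.

    Random coefficients are drawn in the k-dimensional subspace and mapped back
    to d_model, so every feature direction starts aligned with the activation
    geometry rather than pointing into near-empty directions (a common source
    of dead features).
    """
    _, k = basis.shape
    coeffs = torch.randn(n_features, k)
    W_dec = coeffs @ basis.T                      # [n_features, d_model]
    return W_dec / W_dec.norm(dim=1, keepdim=True)


# Hypothetical usage: attn_out collected from hooks on a transformer's attention layers.
attn_out = torch.randn(4096, 768)                 # stand-in for real activations
basis = active_subspace(attn_out, var_threshold=0.99)
W_dec = init_sae_decoder_in_subspace(n_features=16384, basis=basis)
```

The key design choice sketched here is that randomness lives only in the coefficients over the active subspace, so no decoder direction starts orthogonal to where the activations actually vary.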
Similar Papers
Dense SAE Latents Are Features, Not Bugs
Machine Learning (CS)
Helps understand how computers "think" about words.
Features Emerge as Discrete States: The First Application of SAEs to 3D Representations
Machine Learning (CS)
Helps computers understand 3D shapes better.
Making Every Head Count: Sparse Attention Without the Speed-Performance Trade-off
Machine Learning (CS)
Makes AI understand long texts much faster.