
Attention Layers Add Into Low-Dimensional Residual Subspaces

Published: August 23, 2025 | arXiv ID: 2508.16929v1

By: Junxuan Wang, Xuyang Ge, Wentao Shu, and more

Potential Business Impact:

Improves interpretability tooling for large language models by nearly eliminating "dead features" in sparse autoencoders, making model internals easier to analyze.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

While transformer models are widely believed to operate in high-dimensional hidden spaces, we show that attention outputs are confined to a surprisingly low-dimensional subspace, where about 60% of the directions account for 99% of the variance, a phenomenon that is induced by the attention output projection matrix and consistently observed across diverse model families and datasets. Critically, we identify this low-rank structure as a fundamental cause of the prevalent dead feature problem in sparse dictionary learning, where it creates a mismatch between randomly initialized features and the intrinsic geometry of the activation space. Building on this insight, we propose a subspace-constrained training method for sparse autoencoders (SAEs), initializing feature directions within the active subspace of the activations. Our approach reduces dead features from 87% to below 1% in Attention Output SAEs with 1M features, and can further extend to other sparse dictionary learning methods. Our findings provide both new insights into the geometry of attention and practical tools for improving sparse dictionary learning in large language models.
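To make the two ideas in the abstract concrete, here is a minimal sketch in PyTorch: it estimates the "active subspace" of attention outputs via PCA (the directions covering 99% of the variance) and then initializes SAE decoder directions inside that subspace. This is not the authors' code; the function names (`active_subspace`, `init_sae_decoder_in_subspace`), the variance threshold handling, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch, NOT the paper's implementation.
# Assumes `attn_outputs` is a matrix of attention-output activations
# with shape (num_tokens, d_model); all names here are hypothetical.
import torch


def active_subspace(attn_outputs: torch.Tensor, variance_threshold: float = 0.99):
    """Estimate the low-dimensional subspace capturing most of the variance
    of attention outputs via PCA (SVD of the centered activation matrix)."""
    centered = attn_outputs - attn_outputs.mean(dim=0, keepdim=True)
    # Squared singular values are proportional to per-direction variance.
    _, S, Vh = torch.linalg.svd(centered, full_matrices=False)
    var = S ** 2
    cum = torch.cumsum(var, dim=0) / var.sum()
    rank = int(torch.searchsorted(cum, variance_threshold).item()) + 1
    basis = Vh[:rank]  # (rank, d_model) orthonormal rows spanning the active subspace
    return basis, rank


def init_sae_decoder_in_subspace(basis: torch.Tensor, num_features: int):
    """Initialize SAE feature (decoder) directions inside the active subspace:
    random coefficients over the subspace basis, then unit-normalize."""
    rank, d_model = basis.shape
    coeffs = torch.randn(num_features, rank)
    directions = coeffs @ basis  # (num_features, d_model), lies in the subspace
    return directions / directions.norm(dim=1, keepdim=True)


# Example usage with synthetic low-rank data standing in for real attention outputs.
d_model, n_tokens = 512, 10_000
synthetic_rank = int(0.6 * d_model)
attn_outputs = torch.randn(n_tokens, synthetic_rank) @ torch.randn(synthetic_rank, d_model)
basis, rank = active_subspace(attn_outputs)
decoder_init = init_sae_decoder_in_subspace(basis, num_features=16_384)
print(f"active subspace rank: {rank} / {d_model}")
```

The design intent, as described in the abstract, is that feature directions drawn inside the active subspace start out aligned with the geometry of the activations, so far fewer features go "dead" during training than with unconstrained random initialization.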

Country of Origin
🇨🇳 China

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)