Critical attention scaling in long-context transformers

Published: October 7, 2025 | arXiv ID: 2510.05554v1

By: Shi Chen, Zhengjiang Lin, Yury Polyanskiy, and more

Potential Business Impact:

Helps language models keep attention meaningful over much longer inputs, improving comprehension of long documents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As large language models scale to longer contexts, attention layers suffer from a fundamental pathology: attention scores collapse toward uniformity as context length $n$ increases, causing tokens to cluster excessively, a phenomenon known as rank-collapse. While $\textit{attention scaling}$ effectively addresses this deficiency by rescaling attention scores with a polylogarithmic factor $\beta_n$, theoretical justification for this approach remains lacking. We analyze a simplified yet tractable model that magnifies the effect of attention scaling. In this model, attention exhibits a phase transition governed by the scaling factor $\beta_n$: insufficient scaling collapses all tokens to a single direction, while excessive scaling reduces attention to identity, thereby eliminating meaningful interactions between tokens. Our main result identifies the critical scaling $\beta_n \asymp \log n$ and provides a rigorous justification for attention scaling in YaRN and Qwen, clarifying why logarithmic scaling maintains sparse, content-adaptive attention at large context lengths.
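To make the mechanism concrete, here is a minimal sketch (not the authors' code) of attention scaling at the critical rate: the attention logits are multiplied by a factor $\beta_n \propto \log n$ before the softmax, in the spirit of the YaRN/Qwen-style scaling the abstract refers to. The function name, the plain-NumPy setup, and the choice of $\beta_n = \log n$ with no tuning constant are illustrative assumptions.

```python
import numpy as np

def scaled_attention(Q, K, V, beta=None):
    """Dot-product attention with a log-context-length scaling factor.

    Sketch only: logits are multiplied by beta_n ~ log(n) so that scores
    stay peaked (sparse, content-adaptive) instead of collapsing toward
    uniformity as the context length n grows.
    """
    n, d = Q.shape
    if beta is None:
        beta = np.log(n)  # assumed critical scaling beta_n ≍ log n
    logits = beta * (Q @ K.T) / np.sqrt(d)
    # numerically stable softmax over keys
    logits -= logits.max(axis=-1, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Usage: larger contexts automatically get a larger beta_n.
rng = np.random.default_rng(0)
for n in (128, 8192):
    Q, K, V = (rng.standard_normal((n, 64)) for _ in range(3))
    out = scaled_attention(Q, K, V)
    print(n, out.shape)
```

Without the $\beta_n$ factor, the logits' spread relative to $\log n$ shrinks as $n$ grows and the softmax output approaches the uniform distribution; multiplying by a logarithmic factor is the minimal correction that, per the paper's phase-transition result, avoids both that collapse and the opposite extreme where attention degenerates to the identity.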

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)