Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin

Published: October 7, 2025 | arXiv ID: 2510.06477v1

By: Enrique Queipo-de-Llano, Álvaro Arroyo, Federico Barbero, and more

Potential Business Impact:

Explains how large language models organize, compress, and retain information across layers, guiding better use of intermediate representations for embedding and generation tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Attention sinks and compression valleys have attracted significant attention as two puzzling phenomena in large language models, but have been studied in isolation. In this work, we present a surprising connection between attention sinks and compression valleys, tracing both to the formation of massive activations in the residual stream. We prove theoretically that massive activations necessarily produce representational compression and establish bounds on the resulting entropy reduction. Through experiments across several models (410M-120B parameters), we confirm that when the beginning-of-sequence token develops extreme activation norms in the middle layers, both compression valleys and attention sinks emerge simultaneously. Targeted ablation studies validate our theoretical predictions. This unified view motivates us to propose the Mix-Compress-Refine theory of information flow, as an attempt to explain how LLMs organize their computation in depth by controlling attention and representational compression via massive activations. Specifically, we posit that Transformer-based LLMs process tokens in three distinct phases: (1) broad mixing in the early layers, (2) compressed computation with limited mixing in the middle layers, and (3) selective refinement in the late layers. Our framework helps explain why embedding tasks perform best at intermediate layers, whereas generation tasks benefit from full-depth processing, clarifying differences in task-dependent representations.
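The diagnostics described in the abstract (extreme first-token activation norms, attention mass concentrating on that token, and a dip in representational entropy in the middle layers) can be probed with standard tooling. Below is a minimal sketch, not the paper's exact protocol: the model choice, the use of the first token as the sink, and the singular-value entropy used as the compression measure are all illustrative assumptions.

```python
# Hypothetical sketch: probing for massive activations, attention sinks, and
# compression valleys layer by layer in a small causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-410m"  # assumed; any causal LM exposing attentions works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Attention sinks and compression valleys are two sides of the same coin."
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True, output_attentions=True)

def matrix_entropy(h: torch.Tensor) -> float:
    """Shannon entropy of the normalized singular-value spectrum of the
    token-representation matrix -- one common notion of representational
    compression (low entropy ~ low effective rank), assumed here as a proxy
    for the quantity bounded in the paper."""
    s = torch.linalg.svdvals(h.float())
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * p.log()).sum())

for layer, (h, attn) in enumerate(zip(out.hidden_states[1:], out.attentions)):
    h0 = h[0]                        # (seq_len, hidden) token representations
    first_norm = h0[0].norm().item() # residual-stream norm of the first (BOS-like) token
    mean_norm = h0.norm(dim=-1).mean().item()
    # Fraction of attention (averaged over heads and query positions) landing
    # on the first token -- a simple attention-sink score.
    sink_score = attn[0, :, :, 0].mean().item()
    print(f"layer {layer:2d} | first-token norm / mean norm = {first_norm / mean_norm:5.1f} "
          f"| sink score = {sink_score:.2f} | matrix entropy = {matrix_entropy(h0):.2f}")
```

If a model behaves as the abstract describes, the norm ratio and sink score should rise together in the middle layers while the matrix entropy dips, then relax again toward the final layers.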

Country of Origin
🇬🇧 United Kingdom

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)