Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin
By: Enrique Queipo-de-Llano, Álvaro Arroyo, Federico Barbero, and more
Potential Business Impact:
Explains how AI models organize and compress information as they process text.
Attention sinks and compression valleys have attracted significant attention as two puzzling phenomena in large language models, but have been studied in isolation. In this work, we present a surprising connection between attention sinks and compression valleys, tracing both to the formation of massive activations in the residual stream. We prove theoretically that massive activations necessarily produce representational compression and establish bounds on the resulting entropy reduction. Through experiments across several models (410M-120B parameters), we confirm that when the beginning-of-sequence token develops extreme activation norms in the middle layers, both compression valleys and attention sinks emerge simultaneously. Targeted ablation studies validate our theoretical predictions. This unified view motivates us to propose the Mix-Compress-Refine theory of information flow, as an attempt to explain how LLMs organize their computation in depth by controlling attention and representational compression via massive activations. Specifically, we posit that Transformer-based LLMs process tokens in three distinct phases: (1) broad mixing in the early layers, (2) compressed computation with limited mixing in the middle layers, and (3) selective refinement in the late layers. Our framework helps explain why embedding tasks perform best at intermediate layers, whereas generation tasks benefit from full-depth processing, clarifying differences in task-dependent representations.
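The abstract ties massive activations on the beginning-of-sequence token to both attention sinks (attention mass piling onto that token) and compression valleys (reduced representational entropy in the middle layers). The sketch below is one way to probe all three signals layer by layer in a Hugging Face causal LM; the model name, the BOS-norm ratio, the spectral-entropy proxy, and the attention-mass measure are illustrative assumptions on my part, not the paper's exact metrics or ablation setup.

```python
# Hypothetical probe: track massive activations, representational compression,
# and attention-sink strength per layer. Thresholds and metrics are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-410m"  # assumed example; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Attention sinks and compression valleys are two sides of the same coin."
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True, output_attentions=True)

def spectral_entropy(h):
    """Entropy of the normalized singular-value spectrum of a (tokens x dim)
    matrix. Lower entropy ~ more compressed (lower effective rank) states."""
    s = torch.linalg.svdvals(h.float())
    p = s / s.sum()
    p = p[p > 0]
    return -(p * p.log()).sum().item()

# hidden_states[0] is the embedding output, so pair layer i with attentions[i].
for layer, (h, attn) in enumerate(zip(out.hidden_states[1:], out.attentions)):
    h = h[0]                        # (seq_len, hidden_dim) residual stream
    norms = h.norm(dim=-1)          # per-token activation norms
    # First token position used as the sink candidate (BOS if the tokenizer adds one).
    bos_ratio = (norms[0] / norms[1:].median()).item()   # massive-activation proxy
    entropy = spectral_entropy(h)                        # compression proxy
    sink_mass = attn[0, :, :, 0].mean().item()           # mean attention on token 0
    print(f"layer {layer:2d}  first-token norm ratio {bos_ratio:6.2f}  "
          f"spectral entropy {entropy:5.2f}  attention on first token {sink_mass:5.2f}")
```

Under the paper's account, one would expect the three columns to move together: in the middle layers the first-token norm ratio spikes, spectral entropy drops (the valley), and attention mass on that token rises (the sink), with all three relaxing again in the late layers.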
Similar Papers
Mitigating Attention Sinks and Massive Activations in Audio-Visual Speech Recognition with LLMs
Audio and Speech Processing
Makes computers understand talking better, even with bad sound.
Attention Sinks in Diffusion Language Models
Computation and Language
Helps computers learn language more like humans.