Value-State Gated Attention for Mitigating Extreme-Token Phenomena in Transformers
By: Rui Bu, Haofeng Zhong, Wenzheng Chen, and more
Potential Business Impact:
Fixes AI mistakes by controlling its focus.
Large models based on the Transformer architecture are susceptible to extreme-token phenomena, such as attention sinks and value-state drains. These issues, which degrade model performance, quantization fidelity, and interpretability, arise from a problematic mutual reinforcement mechanism where the model learns an inefficient 'no-op' behavior by focusing attention on tokens with near-zero value states. In this paper, we propose Value-State Gated Attention (VGA), a simple, dedicated, and stable architectural mechanism for performing 'no-op' attention efficiently by directly breaking this cycle. VGA introduces a learnable, data-dependent gate, computed directly from the value vectors (V), to modulate the output. Through a theoretical analysis of the underlying gradients, we show that gating the value-state with a function of itself is more effective at decoupling value and attention score updates than prior methods that gate on input embeddings. This creates a direct regulatory pathway that allows the model to suppress a token's contribution based on its emergent value representation. Our experiments demonstrate that VGA significantly mitigates the formation of attention sinks and stabilizes value-state norms, leading to improved performance, robust quantization fidelity, and enhanced model interpretability.
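To make the mechanism concrete, below is a minimal single-head sketch of value-state gated attention in PyTorch. It illustrates the core idea from the abstract: a learnable gate computed from the value vectors (V) that lets the model suppress a token's contribution directly, rather than routing attention mass onto a sink token. The specific gate parameterization (a sigmoid of a linear projection of V) and the choice to apply the gate to V before the attention-weighted sum are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ValueGatedAttention(nn.Module):
    """Minimal single-head sketch of Value-State Gated Attention (VGA).

    The gate is a function of the value states themselves, giving the model
    a direct pathway to zero out a token's contribution instead of learning
    the 'no-op' behavior via attention sinks and value-state drains.
    Gate form and placement are assumptions made for illustration.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Hypothetical gate projection: computes the gate directly from V.
        self.gate_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Standard scaled dot-product attention scores.
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        attn = F.softmax(scores, dim=-1)

        # Value-state gate: computed from V itself, so a token whose value
        # representation should not contribute can be suppressed directly.
        gate = torch.sigmoid(self.gate_proj(v))
        gated_v = gate * v

        # Attention-weighted sum over the gated value states.
        out = attn @ gated_v
        return self.out_proj(out)


if __name__ == "__main__":
    layer = ValueGatedAttention(d_model=64)
    x = torch.randn(2, 16, 64)
    print(layer(x).shape)  # torch.Size([2, 16, 64])
```

In this sketch, gating V per token (rather than gating on the input embeddings) mirrors the abstract's claim that conditioning the gate on the value state itself decouples value updates from attention-score updates; a multi-head version would apply the same gating within each head.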
Similar Papers
SAGA: Selective Adaptive Gating for Efficient and Expressive Linear Attention
CV and Pattern Recognition
Makes computers see clearer, faster, and with less memory.
Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free
Computation and Language
Makes AI understand long texts better.
CroSTAta: Cross-State Transition Attention Transformer for Robotic Manipulation
Robotics
Teaches robots to learn from mistakes.