Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free
By: Zihan Qiu, Zekun Wang, Bo Zheng, and more
Potential Business Impact:
Makes AI understand long texts better.
Gating mechanisms have been widely utilized, from early models like LSTMs and Highway Networks to recent state space models, linear attention, and softmax attention. Yet the existing literature rarely examines the specific effects of gating. In this work, we conduct comprehensive experiments to systematically investigate gating-augmented softmax attention variants. Specifically, we compare over 30 variants of 15B Mixture-of-Experts (MoE) models and 1.7B dense models trained on a 3.5 trillion token dataset. Our central finding is that a simple modification, applying a head-specific sigmoid gate after Scaled Dot-Product Attention (SDPA), consistently improves performance. This modification also enhances training stability, tolerates larger learning rates, and improves scaling properties. By comparing various gating positions and computational variants, we attribute this effectiveness to two key factors: (1) introducing non-linearity on top of the low-rank mapping in softmax attention, and (2) applying query-dependent sparse gating scores to modulate the SDPA output. Notably, we find that this sparse gating mechanism mitigates the 'attention sink' and enhances long-context extrapolation performance, and we release the related $\href{https://github.com/qiuzh20/gated_attention}{code}$ and $\href{https://huggingface.co/QwQZh/gated_attention}{models}$ to facilitate future research.
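To make the central modification concrete, below is a minimal sketch of a multi-head attention block with a head-specific, query-dependent sigmoid gate applied to the SDPA output before the output projection. This is not the authors' released implementation (see the linked repository for that); the class name `GatedAttention`, the `gate_proj` layer, and all hyperparameters are illustrative assumptions, and the sketch assumes the gate is computed from the same hidden states as the query.

```python
# Minimal sketch (not the authors' released code) of sigmoid-gated softmax attention:
# a head-specific, query-dependent gate applied elementwise to the SDPA output.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        # Gate is computed from the same input as the query, so its scores are
        # query-dependent; one value per head channel (illustrative choice).
        self.gate_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        shape = (b, t, self.n_heads, self.d_head)
        q = self.q_proj(x).view(shape).transpose(1, 2)  # (b, h, t, d_head)
        k = self.k_proj(x).view(shape).transpose(1, 2)
        v = self.v_proj(x).view(shape).transpose(1, 2)

        # Standard causal scaled dot-product attention.
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)

        # Head-specific sigmoid gate applied after SDPA: adds non-linearity on
        # the otherwise low-rank value/output path and sparsifies the output.
        gate = torch.sigmoid(self.gate_proj(x)).view(shape).transpose(1, 2)
        attn = attn * gate

        attn = attn.transpose(1, 2).reshape(b, t, self.n_heads * self.d_head)
        return self.o_proj(attn)


if __name__ == "__main__":
    # Usage sketch: drop-in replacement for a standard multi-head attention block.
    layer = GatedAttention(d_model=512, n_heads=8)
    out = layer(torch.randn(2, 16, 512))
    print(out.shape)  # torch.Size([2, 16, 512])
```

Because each gate value can be driven toward zero for a given query, the gate can suppress the attention output entirely, which is one way the paper's finding about reduced reliance on an 'attention sink' token can be read.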
Similar Papers
Gating is Weighting: Understanding Gated Linear Attention through In-context Learning
Machine Learning (CS)
Lets computers learn better by choosing important words.
Mixture of Sparse Attention: Content-Based Learnable Sparse Attention via Expert-Choice Routing
Machine Learning (CS)
Makes AI smarter and faster by focusing on important words.
SAGA: Selective Adaptive Gating for Efficient and Expressive Linear Attention
CV and Pattern Recognition
Makes computers see clearer, faster, and with less memory.