Gating is Weighting: Understanding Gated Linear Attention through In-context Learning
By: Yingcong Li, Davoud Ataee Tarzanagh, Ankit Singh Rawat, and more
Potential Business Impact:
Lets computers learn better by weighting which words matter most.
Linear attention methods offer a compelling alternative to softmax attention due to their efficiency in recurrent decoding. Recent research has focused on enhancing standard linear attention by incorporating gating while retaining its computational benefits. Such Gated Linear Attention (GLA) architectures include competitive models such as Mamba and RWKV. In this work, we investigate the in-context learning capabilities of the GLA model and make the following contributions. We show that a multilayer GLA can implement a general class of Weighted Preconditioned Gradient Descent (WPGD) algorithms with data-dependent weights. These weights are induced by the gating mechanism and the input, enabling the model to control the contribution of individual tokens to prediction. To further understand the mechanics of this weighting, we introduce a novel data model with multitask prompts and characterize the optimization landscape of learning a WPGD algorithm. Under mild conditions, we establish the existence and uniqueness (up to scaling) of a global minimum, corresponding to a unique WPGD solution. Finally, we translate these findings to explore the optimization landscape of GLA and shed light on how gating facilitates context-aware learning and when it is provably better than vanilla linear attention.
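The core claim, that gating in a GLA layer acts as data-dependent token weighting inside an in-context learning step, can be illustrated with a toy calculation. The sketch below is a minimal, simplified assumption on my part (scalar gates, an identity preconditioner, and keys/values taken directly from the prompt tokens), not the paper's construction: it unrolls a gated linear-attention recurrence and checks that its read-out at the query equals a weighted preconditioned predictor whose per-token weights are the cumulative gate products.

```python
import numpy as np

# Hypothetical sketch: one gated linear attention (GLA) recurrence versus the
# weighted preconditioned gradient descent (WPGD)-style predictor it emulates.
# Scalar gates and identity preconditioner are simplifying assumptions.

rng = np.random.default_rng(0)
d, n = 4, 8                       # feature dimension, number of in-context examples

X = rng.normal(size=(n, d))       # in-context inputs x_1..x_n
y = rng.normal(size=n)            # in-context labels y_1..y_n
x_query = rng.normal(size=d)      # query input

# --- GLA recurrence: S_t = g_t * S_{t-1} + y_t x_t^T, read-out o = S_n q ---
gates = rng.uniform(0.5, 1.0, size=n)   # data-dependent gates (random here for illustration)
S = np.zeros((1, d))
for t in range(n):
    S = gates[t] * S + np.outer(y[t], X[t])
o_gla = (S @ x_query).item()

# --- WPGD view: prediction = x_query^T P (sum_i w_i y_i x_i) ---
# Unrolling the recurrence gives per-token weights w_i = prod_{t>i} g_t,
# so earlier tokens are discounted by later gates; P is the preconditioner.
w = np.array([np.prod(gates[i + 1:]) for i in range(n)])
P = np.eye(d)
o_wpgd = x_query @ P @ (w[:, None] * X * y[:, None]).sum(axis=0)

print(np.allclose(o_gla, o_wpgd))  # True: the gates act as per-token weights
```

In this simplified setting the equivalence is exact by construction; the paper's contribution is to show that multilayer GLA can realize a general class of such WPGD algorithms with learned preconditioners and data-dependent weights, and to characterize when learning them has a unique (up to scaling) global minimum.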
Similar Papers
GatedFWA: Linear Flash Windowed Attention with Gated Associative Memory
Machine Learning (CS)
Makes AI models learn faster and remember more.
Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free
Computation and Language
Makes AI understand long texts better.
Gated KalmaNet: A Fading Memory Layer Through Test-Time Ridge Regression
Machine Learning (CS)
Remembers more of the past for better AI.