Is Random Attention Sufficient for Sequence Modeling? Disentangling Trainable Components in the Transformer
By: Yihe Dong, Lorenzo Noci, Mikhail Khodak, and more
Potential Business Impact:
Lets computers learn by focusing on important words.
The transformer architecture is central to the success of modern Large Language Models (LLMs), in part due to its surprising ability to perform a wide range of tasks - including mathematical reasoning, memorization, and retrieval - using only gradient-based learning on next-token prediction. While the core component of a transformer is the self-attention mechanism, we question how much, and which aspects, of the performance gains can be attributed to it. To this end, we compare standard transformers to variants in which either the MLP layers or the attention weights are frozen at initialization. Surprisingly, we find that attention with frozen key and query weights is not only able to form induction heads, but can also perform competitively on language modeling. We formalize this by proving a new expressivity result for transformer models with frozen key and query weights. To further isolate the contribution of attention, we design MixiT, an architecture with entirely random attention scores and provably stable signal propagation, which overcomes prior depth-wise scaling challenges in random transformers. We use the successes and failures of MixiT to understand the role each transformer component plays: attention is largely responsible for in-context reasoning, while MLPs are primarily responsible for knowledge storage, in collaboration with attention. Our results suggest that the transformer architecture has a built-in inductive bias towards forming specialized circuits, since it forms them even without learnable attention weights.
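To make the two ablations in the abstract concrete, below is a minimal PyTorch-style sketch (not the authors' code) of what "frozen key and query weights" and "entirely random attention scores" could look like in a single causal attention head. All class names, shapes, and hyperparameters here are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: one attention head with Q/K frozen at random init,
# and a MixiT-flavored variant whose attention scores are fixed random numbers.
# Names, shapes, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrozenQKAttention(nn.Module):
    """Self-attention whose query/key projections stay at random initialization."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)  # value/output remain trainable
        self.o = nn.Linear(d_model, d_model, bias=False)
        for p in list(self.q.parameters()) + list(self.k.parameters()):
            p.requires_grad = False  # freeze Q and K at their initial values

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        d = x.size(-1)
        scores = self.q(x) @ self.k(x).transpose(-2, -1) / d**0.5
        mask = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(mask, float("-inf"))  # causal masking
        return self.o(F.softmax(scores, dim=-1) @ self.v(x))


class RandomScoreAttention(nn.Module):
    """MixiT-flavored variant: attention scores are fixed random numbers,
    independent of the input; only the value/output paths are learned."""

    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        self.v = nn.Linear(d_model, d_model, bias=False)
        self.o = nn.Linear(d_model, d_model, bias=False)
        # Input-independent random scores, stored as a buffer (not trainable).
        self.register_buffer("scores", torch.randn(max_len, max_len))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        t = x.size(1)
        s = self.scores[:t, :t]
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        s = s.masked_fill(mask, float("-inf"))  # causal masking of random scores
        return self.o(F.softmax(s, dim=-1) @ self.v(x))


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)
    print(FrozenQKAttention(64)(x).shape)     # torch.Size([2, 16, 64])
    print(RandomScoreAttention(64)(x).shape)  # torch.Size([2, 16, 64])
```

The sketch only shows where the trainable parameters sit in each variant; scaling such random-attention models stably with depth is the signal-propagation issue the paper's MixiT design addresses.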
Similar Papers
Deconstructing Attention: Investigating Design Principles for Effective Language Modeling
Computation and Language
Makes computer language models work better and simpler.
Small transformer architectures for task switching
Machine Learning (CS)
Helps AI switch tasks better, like a smart student.
It's All Connected: A Journey Through Test-Time Memorization, Attentional Bias, Retention, and Online Optimization
Machine Learning (CS)
Makes AI remember more and learn faster.