Score: 3

Log-Linear Attention

Published: June 5, 2025 | arXiv ID: 2506.04761v2

By: Han Guo, Songlin Yang, Tarushii Goel, and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Lets AI models handle much longer text faster while retaining more of the context.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The attention mechanism in Transformers is an important primitive for accurate and scalable sequence modeling. Its quadratic compute and linear memory complexity, however, remain significant bottlenecks. Linear attention and state-space models enable linear-time, constant-memory sequence modeling and can, moreover, be trained efficiently through matmul-rich parallelization across sequence length. However, at their core these models are still RNNs, and their use of a fixed-size hidden state to model the context is a fundamental limitation. This paper develops log-linear attention, an attention mechanism that balances the efficiency of linear attention with the expressiveness of softmax attention. Log-linear attention replaces the fixed-size hidden state with a logarithmically growing set of hidden states. We show that with a particular growth function, log-linear attention admits a similarly matmul-rich parallel form whose compute cost is log-linear in sequence length. Log-linear attention is a general framework and can be applied on top of existing linear attention variants. As case studies, we instantiate log-linear variants of two recent architectures -- Mamba-2 and Gated DeltaNet -- and find they perform well compared to their linear-time counterparts.
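To make the core idea concrete, below is a minimal NumPy sketch of what "a logarithmically growing set of hidden states" could look like. It assumes a Fenwick-tree-style power-of-two partition of the prefix (one natural choice consistent with the abstract's "particular growth function") and simple per-level scalar weights; the function names (`fenwick_buckets`, `log_linear_attention`) and the weight vector `lam` are illustrative assumptions, not the paper's actual algorithm or API, and the naive O(T log T) loop stands in for the matmul-rich parallel form described above.

```python
import numpy as np

def fenwick_buckets(t):
    """Split the prefix [0, t) into O(log t) contiguous buckets whose sizes
    are the powers of two in the binary expansion of t (Fenwick-style)."""
    buckets, start = [], 0
    for bit in reversed(range(t.bit_length())):
        size = 1 << bit
        if t & size:
            buckets.append((start, start + size))
            start += size
    return buckets  # e.g. t=13 -> [(0, 8), (8, 12), (12, 13)]

def log_linear_attention(Q, K, V, lam):
    """Naive reference: the output at step t reads one (d x d) state per
    bucket of the prefix, weighted by a per-level scalar lam[level]."""
    T, d = Q.shape
    out = np.zeros_like(V)
    for t in range(1, T + 1):
        y = np.zeros(V.shape[1])
        for level, (lo, hi) in enumerate(fenwick_buckets(t)):
            # Bucket state: sum of outer products k_i v_i^T over the bucket,
            # i.e. a linear-attention-style hidden state for that segment.
            S = K[lo:hi].T @ V[lo:hi]          # (d, d)
            y += lam[level] * (Q[t - 1] @ S)   # query reads the bucket state
        out[t - 1] = y
    return out

# Toy usage: 16 positions, head dimension 8, uniform level weights.
T, d = 16, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))
lam = np.ones(T.bit_length())  # one weight per Fenwick level (illustrative)
print(log_linear_attention(Q, K, V, lam).shape)  # (16, 8)
```

At each step the number of bucket states is O(log t) rather than a single fixed-size state, which is the memory/expressiveness trade-off the abstract describes; the paper's contribution is showing this can be computed with a parallel, matmul-rich form at O(T log T) cost rather than the explicit per-step loop used here.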

Country of Origin
🇺🇸 United States


Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)