Higher-order Linear Attention
By: Yifan Zhang, Zhen Qin, Quanquan Gu
Potential Business Impact:
Makes AI process long texts faster.
The quadratic cost of scaled dot-product attention is a central obstacle to scaling autoregressive language models to long contexts. Linear-time attention and State Space Models (SSMs) provide scalable alternatives but are typically restricted to first-order or kernel-based approximations, which can limit expressivity. We introduce Higher-order Linear Attention (HLA), a causal, streaming mechanism that realizes higher-order interactions via compact prefix sufficient statistics. In the second-order case, HLA maintains a constant-size state and computes per-token outputs in linear time without materializing any $n \times n$ matrices. We give closed-form streaming identities, a strictly causal masked variant using two additional summaries, and a chunk-parallel training scheme based on associative scans that reproduces the activations of a serial recurrence exactly. We further outline extensions to third and higher orders. Collectively, these results position HLA as a principled, scalable building block that combines attention-like, data-dependent mixing with the efficiency of modern recurrent architectures. Project Page: https://github.com/yifanzhang-pro/HLA.
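The abstract does not spell out HLA's update equations, so the sketch below is only an illustration of the general idea it describes: a causal, streaming second-order mechanism whose per-token cost is constant because it carries compact prefix sufficient statistics rather than an $n \times n$ attention matrix. It uses an assumed degree-2 polynomial-kernel feature map (phi(x) = vec(x x^T)) as a stand-in, which the abstract explicitly distinguishes from HLA proper; the function names are illustrative and not from the paper.

```python
# Minimal sketch (NOT the paper's HLA equations): streaming "second-order"
# linear attention that keeps constant-size prefix statistics instead of an
# n x n attention matrix. The feature map phi(x) = vec(x x^T) is an assumed
# stand-in used only to make the constant-state / linear-time idea concrete.
import numpy as np


def second_order_streaming(Q, K, V, eps=1e-6):
    """Causal attention with scores (q^T k)^2, computed from prefix statistics.

    State carried across tokens (size independent of sequence length n):
      S_t = sum_{i<=t} phi(k_i) v_i^T   shape (d*d, d_v)
      z_t = sum_{i<=t} phi(k_i)         shape (d*d,)
    Output: y_t = S_t^T phi(q_t) / (z_t^T phi(q_t) + eps).
    """
    n, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d * d, d_v))          # running numerator statistics
    z = np.zeros(d * d)                 # running normalizer statistics
    Y = np.zeros((n, d_v))
    for t in range(n):
        phi_k = np.outer(K[t], K[t]).ravel()       # second-order key feature
        S += phi_k[:, None] * V[t][None, :]        # O(d^2 * d_v) per token
        z += phi_k
        phi_q = np.outer(Q[t], Q[t]).ravel()       # second-order query feature
        Y[t] = (S.T @ phi_q) / (z @ phi_q + eps)
    return Y


def second_order_quadratic(Q, K, V, eps=1e-6):
    """O(n^2) reference: causal attention with unnormalized scores (q^T k)^2."""
    scores = (Q @ K.T) ** 2 * np.tril(np.ones((len(Q), len(Q))))
    return (scores @ V) / (scores.sum(axis=1, keepdims=True) + eps)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, d_v = 64, 8, 16
    Q = rng.standard_normal((n, d))
    K = rng.standard_normal((n, d))
    V = rng.standard_normal((n, d_v))
    # The streaming recurrence matches the quadratic reference to numerical precision.
    print(np.abs(second_order_streaming(Q, K, V) -
                 second_order_quadratic(Q, K, V)).max())
```

Because the assumed statistics here are plain prefix sums, partial sums over chunks can be combined with an associative scan; this is the kind of property the abstract's chunk-parallel training scheme relies on to reproduce the activations of the serial recurrence exactly.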
Similar Papers
Long-Context Modeling with Dynamic Hierarchical Sparse Attention for On-Device LLMs
Computation and Language
Makes AI understand long texts faster and cheaper.
ENA: Efficient N-dimensional Attention
Machine Learning (CS)
Helps computers understand complex, long data faster.
Making Every Head Count: Sparse Attention Without the Speed-Performance Trade-off
Machine Learning (CS)
Makes AI understand long texts much faster.