Transformer Based Linear Attention with Optimized GPU Kernel Implementation
By: Armin Gerami, Ramani Duraiswami
Potential Business Impact:
Makes AI learn faster and use less memory.
The original softmax-based attention mechanism (regular attention) in the extremely successful Transformer architecture computes attention between $N$ tokens, each with a $D$-dimensional representation per head, with a time complexity of $O(N^2D)$. Given the success of Transformers, improving their runtime during both training and inference is a popular research area. One such approach is the linear attention (LA) mechanism, which offers a linear time complexity of $O(ND^2)$ and has demonstrated accuracy comparable to regular attention. In practice, however, LA lags behind its theoretical efficiency. We propose a novel method for LA's forward and backward passes, along with a highly optimized CUDA implementation. Our approach outperforms the state of the art by 3.3 times in speed and reduces memory consumption by 3.6 times. We validate these improvements in both single-layer and end-to-end settings by training a 1.4-billion-parameter language model, which demonstrates expressivity similar to regular attention on major reasoning benchmarks.
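To make the $O(ND^2)$ claim concrete, the sketch below shows the standard non-causal linear-attention reordering: because the feature-mapped keys and values are summed into a $D \times D$ matrix before the queries are applied, no $N \times N$ attention matrix is ever formed. The feature map (ELU + 1) and the function names here are illustrative assumptions, not the paper's specific formulation, and this plain NumPy version does not reproduce the authors' optimized CUDA forward/backward kernels.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1, a common positive feature map in linear-attention work.
    # This particular choice is an assumption for illustration.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Non-causal linear attention for a single head.

    Q, K, V: arrays of shape (N, D). Runs in O(N * D^2) time because the
    key/value summary S is a D x D matrix, independent of sequence length N.
    """
    phi_Q = feature_map(Q)      # (N, D)
    phi_K = feature_map(K)      # (N, D)
    S = phi_K.T @ V             # (D, D) summed key/value outer products
    z = phi_K.sum(axis=0)       # (D,)  normalizer accumulator
    num = phi_Q @ S             # (N, D)
    den = phi_Q @ z             # (N,)
    return num / den[:, None]

# Usage: N = 4096 tokens, D = 64 dimensions per head.
N, D = 4096, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, D)) for _ in range(3))
out = linear_attention(Q, K, V)  # (N, D); no N x N matrix is materialized
```

Reordering the computation this way is what gives LA its linear complexity in $N$; the paper's contribution is making this reordering, and its backward pass, fast in practice on GPUs.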
Similar Papers
LUNA: Linear Universal Neural Attention with Generalization Guarantees
Machine Learning (CS)
Helps computers understand long texts better.
Efficient High-Accuracy PDEs Solver with the Linear Attention Neural Operator
Machine Learning (CS)
Computers solve science problems faster, more accurately.
Log-Linear Attention
Machine Learning (CS)
Makes AI understand long stories better, faster.