Beyond Homogeneous Attention: Memory-Efficient LLMs via Fourier-Approximated KV Cache
By: Xiaoran Liu, Siyang He, Qiqi Wang, and more
Potential Business Impact:
Makes AI remember more without slowing down.
Large Language Models struggle with memory demands from the growing Key-Value (KV) cache as context lengths increase. Existing compression methods homogenize head dimensions or rely on attention-guided token pruning, often sacrificing accuracy or introducing computational overhead. We propose FourierAttention, a training-free framework that exploits the heterogeneous roles of transformer head dimensions: lower dimensions prioritize local context, while upper ones capture long-range dependencies. By projecting the long-context-insensitive dimensions onto orthogonal Fourier bases, FourierAttention approximates their temporal evolution with fixed-length spectral coefficients. Evaluations on LLaMA models show that FourierAttention achieves the best long-context accuracy on LongBench and Needle-In-A-Haystack (NIAH). In addition, a custom Triton kernel, FlashFourierAttention, is designed to optimize memory via streamlined read-write operations, enabling efficient deployment without compromising performance.
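To make the core idea concrete, here is a minimal sketch, not the paper's implementation: it assumes an orthonormal cosine (DCT-style) basis as the "orthogonal Fourier basis" and uses hypothetical helper names (fourier_basis, compress, reconstruct). It only illustrates how a slice of K/V values along the time axis can be stored as a fixed number of spectral coefficients and approximately recovered on read; the actual FourierAttention applies this per head dimension inside the attention computation.

# Hypothetical sketch (assumption, not the paper's code): compress the
# long-context-insensitive head dimensions of a K/V tensor by projecting
# their time axis onto a truncated orthonormal cosine basis, keeping only
# a fixed number of spectral coefficients.
import torch

def fourier_basis(seq_len: int, num_coeffs: int) -> torch.Tensor:
    """Orthonormal DCT-II-style basis of shape (num_coeffs, seq_len)."""
    t = torch.arange(seq_len, dtype=torch.float32)
    k = torch.arange(num_coeffs, dtype=torch.float32).unsqueeze(1)
    basis = torch.cos(torch.pi * k * (2 * t + 1) / (2 * seq_len))
    basis[0] *= (1.0 / seq_len) ** 0.5   # normalize the DC row
    basis[1:] *= (2.0 / seq_len) ** 0.5  # normalize the remaining rows
    return basis                          # rows are orthonormal

def compress(kv_slice: torch.Tensor, num_coeffs: int) -> torch.Tensor:
    """Project (seq_len, dim) values onto the basis -> (num_coeffs, dim)."""
    B = fourier_basis(kv_slice.shape[0], num_coeffs)
    return B @ kv_slice

def reconstruct(coeffs: torch.Tensor, seq_len: int) -> torch.Tensor:
    """Approximately recover (seq_len, dim) from fixed-length coefficients."""
    B = fourier_basis(seq_len, coeffs.shape[0])
    return B.T @ coeffs

# Toy usage: a 4096-token context, one head's upper 32 dimensions,
# stored as 64 spectral coefficients instead of 4096 cached rows.
kv = torch.randn(4096, 32)
coeffs = compress(kv, num_coeffs=64)
approx = reconstruct(coeffs, seq_len=4096)
print(coeffs.shape, approx.shape)  # torch.Size([64, 32]) torch.Size([4096, 32])

In this toy setting the cache footprint for those dimensions no longer grows with sequence length: 64 coefficients stand in for 4096 timesteps, and the reconstruction error depends on how smoothly those dimensions evolve over time, which is exactly the property the paper attributes to the long-context-insensitive dimensions.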
Similar Papers
Lag-Relative Sparse Attention In Long Context Training
Computation and Language
Helps computers remember more of long stories.
Training-free Context-adaptive Attention for Efficient Long Context Modeling
Computation and Language
Makes AI understand long texts faster.
Retrospective Sparse Attention for Efficient Long-Context Generation
Computation and Language
Fixes AI mistakes in long stories.