Beyond Homogeneous Attention: Memory-Efficient LLMs via Fourier-Approximated KV Cache

Published: June 13, 2025 | arXiv ID: 2506.11886v1

By: Xiaoran Liu, Siyang He, Qiqi Wang, and more

Potential Business Impact:

Lets AI models handle much longer inputs with less memory and without slowing down.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models struggle with the memory demands of the growing Key-Value (KV) cache as context lengths increase. Existing compression methods homogenize head dimensions or rely on attention-guided token pruning, often sacrificing accuracy or introducing computational overhead. We propose FourierAttention, a training-free framework that exploits the heterogeneous roles of transformer head dimensions: lower dimensions prioritize local context, while upper ones capture long-range dependencies. By projecting the long-context-insensitive dimensions onto orthogonal Fourier bases, FourierAttention approximates their temporal evolution with fixed-length spectral coefficients. Evaluations on LLaMA models show that FourierAttention achieves the best long-context accuracy on LongBench and Needle-In-A-Haystack (NIAH). Additionally, a custom Triton kernel, FlashFourierAttention, is designed to optimize memory via streamlined read-write operations, enabling efficient deployment without performance compromise.
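To make the core idea concrete, below is a minimal sketch of Fourier-based KV compression along the time axis: each channel's trajectory over the sequence is projected onto a truncated Fourier basis and stored as a fixed number of spectral coefficients, then reconstructed on demand. This is an illustration only, not the authors' implementation; the function names (`compress_kv_fourier`, `reconstruct_kv_fourier`), the choice of which dimensions to compress, and the FlashFourierAttention kernel are all outside this sketch and assumed or omitted.

```python
import numpy as np

def compress_kv_fourier(kv, keep_coeffs):
    """Project each channel's time trajectory onto a truncated Fourier basis.

    kv: array of shape (seq_len, head_dim) -- one head's keys (or values).
    keep_coeffs: number of low-frequency coefficients retained per channel.
    Returns a fixed-size spectrum of shape (keep_coeffs, head_dim).
    """
    # Real FFT along the time axis; each channel becomes a spectrum.
    spectrum = np.fft.rfft(kv, axis=0)
    # Keep only the first `keep_coeffs` coefficients: memory is now
    # independent of sequence length.
    return spectrum[:keep_coeffs, :]

def reconstruct_kv_fourier(coeffs, seq_len):
    """Invert the truncated spectrum back to a length-`seq_len` trajectory."""
    head_dim = coeffs.shape[1]
    full_len = seq_len // 2 + 1          # rfft output length for real input
    padded = np.zeros((full_len, head_dim), dtype=coeffs.dtype)
    padded[:coeffs.shape[0], :] = coeffs  # zero-pad the discarded frequencies
    return np.fft.irfft(padded, n=seq_len, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, head_dim, keep = 4096, 64, 128
    # A smooth toy signal (random walk) stands in for slowly varying
    # "long-context-insensitive" channels; real KV statistics will differ.
    keys = np.cumsum(rng.standard_normal((seq_len, head_dim)), axis=0)
    coeffs = compress_kv_fourier(keys, keep)          # fixed-length storage
    approx = reconstruct_kv_fourier(coeffs, seq_len)  # on-the-fly reconstruction
    err = np.linalg.norm(keys - approx) / np.linalg.norm(keys)
    print(f"kept {keep} complex coeffs per channel; relative error {err:.3f}")
```

The sketch shows why the memory footprint becomes fixed: only `keep_coeffs` complex coefficients per channel are stored regardless of sequence length. The paper's contribution lies in identifying which head dimensions tolerate this approximation and in fusing the reconstruction into the attention kernel.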

Country of Origin
🇨🇳 China

Page Count
10 pages

Category
Computer Science:
Computation and Language