KQ-SVD: Compressing the KV Cache with Provable Guarantees on Attention Fidelity
By: Damien Lesens, Beheshteh T. Rakhshan, Guillaume Rabusseau
The Key-Value (KV) cache is central to the efficiency of transformer-based large language models (LLMs), storing previously computed key and value vectors to accelerate inference. Yet, as sequence length and batch size grow, the cache becomes a major memory bottleneck. Prior compression methods typically apply low-rank decomposition to keys alone or attempt to jointly embed queries and keys, but both approaches neglect that attention depends fundamentally on query-key inner products. In this work, we prove that such strategies are suboptimal for approximating the attention matrix. We introduce KQ-SVD, a simple and computationally efficient method that directly computes an optimal low-rank decomposition of the attention matrix via a closed-form solution. By targeting the true source of redundancy, KQ-SVD preserves attention outputs with higher fidelity under compression. Extensive evaluations on LLaMA and Mistral models demonstrate that our approach consistently delivers superior projection quality.
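To make the suboptimality claim concrete, below is a minimal NumPy sketch, not the paper's actual KQ-SVD implementation: the dimensions, variable names, and the key-only baseline are illustrative assumptions. By the Eckart-Young theorem, the rank-r truncated SVD of the score matrix Q K^T is its optimal rank-r approximation in Frobenius norm, so any rank-r reconstruction built from compressed keys alone can never achieve lower error.

import numpy as np

rng = np.random.default_rng(0)
n, d, r = 256, 64, 16          # sequence length, head dimension, target rank

Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
S = Q @ K.T                     # pre-softmax attention score matrix

# (a) Key-only baseline: rank-r truncated SVD of K alone (a stand-in
#     for methods that compress keys without looking at queries),
#     then rebuild the scores from the compressed keys.
Uk, sk, Vkt = np.linalg.svd(K, full_matrices=False)
K_r = (Uk[:, :r] * sk[:r]) @ Vkt[:r]
err_keys = np.linalg.norm(S - Q @ K_r.T, "fro")

# (b) Score-level compression: rank-r truncated SVD of S itself.
#     By Eckart-Young this is the optimal rank-r approximation of the
#     attention matrix, the objective the abstract says KQ-SVD targets.
Us, ss, Vst = np.linalg.svd(S, full_matrices=False)
S_r = (Us[:, :r] * ss[:r]) @ Vst[:r]
err_scores = np.linalg.norm(S - S_r, "fro")

print(f"key-only SVD score error:    {err_keys:.2f}")
print(f"score-level SVD error:       {err_scores:.2f}")  # never larger than (a)

In an actual KV-cache setting the low-rank factors would be stored in place of the full keys; this sketch only demonstrates the fidelity gap between key-only and score-level compression that motivates the method.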