Sparse Attention across Multiple-context KV Cache
By: Ziyi Cao, Qingyi Si, Jingbin Zhang and more
Potential Business Impact:
Makes AI understand long texts faster and cheaper.
Large language models face significant cost challenges in long-sequence inference. To address this, reusing historical Key-Value (KV) Cache for improved inference efficiency has become a mainstream approach. Recent advances further enhance throughput through sparse attention mechanisms that select only the most relevant KV Cache entries, thereby reducing sequence length. However, such techniques are limited to single-context scenarios, where the historical KV Cache is computed sequentially with causal-attention dependencies. In retrieval-augmented generation (RAG) scenarios, where the retrieved documents serving as context are unknown beforehand, each document's KV Cache is computed and stored independently (termed multiple-context KV Cache), lacking cross-attention between contexts. This renders existing methods ineffective. Although prior work partially recomputes the multiple-context KV Cache to mitigate accuracy loss from the missing cross-attention, it requires retaining all KV Cache throughout, failing to reduce memory overhead. This paper presents SamKV, the first exploration of attention sparsification for multiple-context KV Cache. Specifically, SamKV takes the complementary information of the other contexts into account when sparsifying one context, and then locally recomputes the sparsified information. Experiments demonstrate that our method compresses sequence length to 15% without accuracy degradation compared with full-recomputation baselines, significantly boosting throughput in multi-context RAG scenarios.
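The idea of sparsifying each independently cached context while accounting for the other contexts can be illustrated with a minimal sketch. This is a hypothetical simplification, not the actual SamKV algorithm: the function name, the use of a mean key over the other contexts as the "complementary" signal, and the simple top-k scoring are all assumptions for illustration.

```python
import numpy as np

def sparsify_multi_context_kv(keys_per_ctx, query, keep_ratio=0.15):
    """Keep the most relevant fraction of each context's KV cache.

    keys_per_ctx: list of (L_i, d) key arrays, one per independently
    cached context (no cross-attention between them yet).
    query: (d,) query summary vector.
    Hypothetical helper, not the published SamKV method.
    """
    kept_indices = []
    for i, K in enumerate(keys_per_ctx):
        # Complementary signal: an aggregate of all *other* contexts,
        # folded into the scoring query (a stand-in for SamKV's
        # cross-context complementarity when sparsifying context i).
        others = [K2 for j, K2 in enumerate(keys_per_ctx) if j != i]
        comp = np.mean(np.vstack(others), axis=0) if others else 0.0
        scores = K @ (query + comp)
        # Retain roughly keep_ratio of the positions (e.g. 15%).
        k = max(1, int(np.ceil(keep_ratio * len(K))))
        top = np.argsort(scores)[-k:]
        # The surviving positions would then be locally recomputed so
        # that cross-attention among the kept tokens is restored.
        kept_indices.append(np.sort(top))
    return kept_indices
```

With three contexts of 100 tokens each and `keep_ratio=0.15`, the sketch keeps 15 positions per context, matching the 15% compression figure in the abstract; the actual selection and local-recomputation details in SamKV are more involved.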
Similar Papers
Retrospective Sparse Attention for Efficient Long-Context Generation
Computation and Language
Fixes AI mistakes in long stories.
Lag-Relative Sparse Attention In Long Context Training
Computation and Language
Helps computers remember more of long stories.
Beyond RAG: Task-Aware KV Cache Compression for Comprehensive Knowledge Reasoning
Computation and Language
Makes AI remember more information faster.