G-KV: Decoding-Time KV Cache Eviction with Global Attention
By: Mengqi Liao, Lu Wang, Chaoyun Zhang, and more
Potential Business Impact:
Makes AI remember more without slowing down.
Recent reasoning large language models (LLMs) excel at complex tasks but face significant computational and memory challenges due to long sequence lengths. KV cache compression has emerged as an effective approach to greatly improve reasoning efficiency. However, existing methods often focus on prompt compression or token eviction based on local attention scores, overlooking the long-term importance of tokens. We propose G-KV, a KV cache eviction method that employs a global scoring mechanism, combining local and historical attention scores to more accurately assess token importance. Additionally, we introduce post-training techniques, including reinforcement learning and distillation, to optimize models for compressed KV cache settings. The code for this paper is available at: https://github.com/microsoft/G-KV.
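The abstract only sketches the scoring idea, so the snippet below is a rough illustration rather than the paper's actual formulation: one plausible way to keep a "global" importance score per cached token by blending historical attention (via an exponential moving average) with the attention received at the current decoding step, then evicting the lowest-scoring entries. The function names, the decay parameter, and the tensor shapes are all assumptions; the authors' exact method is in the linked repository.

```python
import torch

def update_global_scores(global_scores, local_attn, decay=0.9):
    """Blend historical and local importance for each cached token.

    global_scores: (num_tokens,) running importance per cached token
    local_attn:    (num_heads, num_tokens) attention from the newly
                   generated query token to all cached tokens
    """
    # Local importance: attention mass a token receives at this step,
    # averaged over heads.
    local_scores = local_attn.mean(dim=0)
    # Exponential moving average retains long-term (historical) importance
    # instead of relying on the latest step alone.
    return decay * global_scores + (1.0 - decay) * local_scores

def evict(keys, values, global_scores, budget):
    """Keep only the `budget` highest-scoring tokens in the KV cache.

    keys, values: (num_heads, num_tokens, head_dim)
    """
    if global_scores.numel() <= budget:
        return keys, values, global_scores
    # Sort kept indices so the remaining cache preserves token order.
    keep = torch.topk(global_scores, budget).indices.sort().values
    return keys[:, keep], values[:, keep], global_scores[keep]
```

In this hypothetical setup, `update_global_scores` would be called once per decoding step and `evict` whenever the cache exceeds its budget; a score based only on `local_scores` would reduce to the purely local eviction criterion the abstract contrasts against.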
Similar Papers
CompressKV: Semantic Retrieval Heads Know What Tokens are Not Important Before Generation
Computation and Language
Makes AI remember more without slowing down.
SmallKV: Small Model Assisted Compensation of KV Cache Compression for Efficient LLM Inference
Machine Learning (CS)
Makes AI remember more without slowing down.
Hold Onto That Thought: Assessing KV Cache Compression On Reasoning
Computation and Language
Helps AI remember more for complex thinking.