Score: 3

G-KV: Decoding-Time KV Cache Eviction with Global Attention

Published: November 29, 2025 | arXiv ID: 2512.00504v1

By: Mengqi Liao, Lu Wang, Chaoyun Zhang, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Lets AI models reason over long contexts while using less memory and running faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent reasoning large language models (LLMs) excel at complex tasks but face significant computational and memory costs due to long sequence lengths. KV cache compression has emerged as an effective way to greatly improve reasoning efficiency. However, existing methods typically focus on prompt compression or evict tokens based on local attention scores, overlooking the long-term importance of tokens. We propose G-KV, a KV cache eviction method that employs a global scoring mechanism, combining local and historical attention scores to assess token importance more accurately. Additionally, we introduce post-training techniques, including reinforcement learning and distillation, to adapt models to compressed KV cache settings. The code for this paper is available at: https://github.com/microsoft/G-KV.
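The abstract does not spell out the scoring rule, so the following is a minimal sketch of what a "global score" eviction policy could look like, assuming the historical and local attention scores are blended with an exponential moving average. The class and parameter names (GlobalScoreKVCache, budget, decay) are illustrative assumptions, not the authors' implementation; see the repository above for the actual method.

```python
# Hypothetical sketch of global-score KV cache eviction (not the paper's code).
# Assumption: the global score is an EMA that mixes each decoding step's local
# attention mass into a running historical score, and the lowest-scored cached
# tokens are evicted once the cache exceeds a fixed budget.
import torch

class GlobalScoreKVCache:
    def __init__(self, budget: int, decay: float = 0.9):
        self.budget = budget       # max number of cached tokens to keep
        self.decay = decay         # weight on historical scores (assumed EMA form)
        self.global_scores = None  # running importance score per cached token

    def update_scores(self, local_attn: torch.Tensor) -> None:
        """Blend this step's attention into the running global score.

        local_attn: (num_cached_tokens,) attention weights from the newest
        query token, averaged over heads.
        """
        if self.global_scores is None:
            self.global_scores = local_attn.clone()
        else:
            # The cache grows by one token per step; pad historical scores
            # with zeros for tokens that have no history yet.
            prev = torch.zeros_like(local_attn)
            prev[: self.global_scores.numel()] = self.global_scores
            self.global_scores = self.decay * prev + (1 - self.decay) * local_attn

    def evict(self, keys: torch.Tensor, values: torch.Tensor):
        """Keep only the top-`budget` tokens by global score."""
        if self.global_scores.numel() <= self.budget:
            return keys, values
        # Sort kept indices so remaining KV entries stay in positional order.
        keep = torch.topk(self.global_scores, self.budget).indices.sort().values
        self.global_scores = self.global_scores[keep]
        return keys[keep], values[keep]
```

Under these assumptions, scoring with an EMA rather than only the current step's attention is what distinguishes a "global" policy from purely local eviction: a token that was heavily attended earlier retains importance even if the newest query ignores it, which is the long-term signal the abstract says local-score methods miss.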

Country of Origin
🇺🇸 🇨🇳 United States, China

Repos / Data Links
https://github.com/microsoft/G-KV

Page Count
24 pages

Category
Computer Science:
Computation and Language