Adaptive KV-Cache Compression without Manually Setting Budget
By: Chenxia Tang, Jianchun Liu, Hongli Xu, and more
Potential Business Impact:
Saves computer memory for faster AI answers.
Large language model (LLM) inference relies heavily on KV-caches to accelerate autoregressive decoding, but the resulting memory footprint grows rapidly with sequence length, posing significant efficiency challenges. Current KV-cache compression methods suffer from a Procrustean-bed problem: they force diverse workloads into fixed compression ratios, leading to suboptimal resource allocation and inference performance. To this end, we present GVote, an adaptive KV-cache compression scheme that eliminates manual budget specification while achieving superior accuracy-efficiency trade-offs. GVote operates on the principle that the important keys are the aggregation of the keys required by future queries. The method predicts future attention demands by Monte Carlo-style sampling of potential queries, then aggregates the keys each sampled query selects to determine the cache budget without manual specification. Experimental evaluation demonstrates GVote's effectiveness across multiple benchmarks, including GSM8K, RULER, and LongBench. Compared to baselines, GVote achieves a 2$\times$ memory reduction while maintaining higher or comparable accuracy.
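The sampling-and-aggregation idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `gvote_budget`, the dot-product scoring, and the fixed per-query `top_k` are all assumptions made for the example. Each sampled query "votes" for the keys it attends to most, and the union of the votes determines how many cache entries to keep, so the budget emerges from the workload rather than a manual setting.

```python
import numpy as np

def gvote_budget(keys, sampled_queries, top_k=4):
    """Hypothetical sketch of GVote-style budget selection.

    Each sampled future query votes for its top_k most-attended keys;
    the union of all votes is the set of keys to retain, and its size
    is the adaptively determined cache budget.
    """
    selected = set()
    for q in sampled_queries:
        scores = keys @ q                      # dot-product attention logits
        top = np.argsort(scores)[-top_k:]      # this query's top-k keys
        selected.update(int(i) for i in top)
    return sorted(selected)                    # indices of keys to keep

rng = np.random.default_rng(0)
keys = rng.normal(size=(64, 16))       # 64 cached keys, head dim 16
queries = rng.normal(size=(8, 16))     # 8 Monte Carlo-sampled future queries
keep = gvote_budget(keys, queries, top_k=4)
print(f"retain {len(keep)} of {len(keys)} keys")
```

Note how the retained-key count falls between `top_k` (all queries agree) and `top_k * num_queries` (no overlap), so diverse workloads naturally get larger budgets than focused ones.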
Similar Papers
G-KV: Decoding-Time KV Cache Eviction with Global Attention
Computation and Language
Makes AI remember more without slowing down.
KV Cache Compression for Inference Efficiency in LLMs: A Review
Distributed, Parallel, and Cluster Computing
Makes AI smarter and faster using less memory.
EvolKV: Evolutionary KV Cache Compression for LLM Inference
Machine Learning (CS)
Makes AI remember more without using more memory.