Randomization Boosts KV Caching, Learning Balances Query Load: A Joint Perspective
By: Fangzhou Wu, Sandeep Silwal, Qiuyi, and more
Potential Business Impact:
Makes AI answer questions much faster.
KV caching is a fundamental technique for accelerating Large Language Model (LLM) inference by reusing key-value (KV) pairs from previous queries, but its effectiveness under limited memory is highly sensitive to the eviction policy. The default Least Recently Used (LRU) eviction algorithm struggles with dynamic online query arrivals, especially in multi-LLM serving scenarios, where balancing query load across workers and maximizing the cache hit rate of each worker are inherently conflicting objectives. We give the first unified mathematical model that captures the core trade-offs between KV cache eviction and query routing. Our analysis reveals the theoretical limitations of existing methods and leads to principled algorithms that integrate provably competitive randomized KV cache eviction with learning-based routing that adapts to evolving query patterns, thereby balancing query load and cache hit rate. Our theoretical results are validated by extensive experiments across 4 benchmarks and 3 prefix-sharing settings, demonstrating up to a 6.92$\times$ improvement in cache hit rate, an 11.96$\times$ reduction in latency, a 14.06$\times$ reduction in time-to-first-token (TTFT), and a 77.4% increase in throughput over state-of-the-art methods. Our code is available at https://github.com/fzwark/KVRouting.
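To make the core idea concrete, here is a minimal Python sketch of the two ingredients the abstract names: a per-worker KV cache that evicts a uniformly random block instead of using LRU, and a router that scores workers by a mix of prefix-cache affinity and current load. The names (`RandomizedKVCache`, `route_query`, the `alpha` trade-off knob) are illustrative assumptions, not the interfaces of the released KVRouting code; the paper's actual eviction policy and learning-based routing are more involved.

```python
import random

# Illustrative sketch only (assumed interfaces, not the KVRouting codebase):
# each worker holds a prefix-block KV cache with randomized eviction, and a
# router scores workers by a weighted mix of cache affinity and current load.

class RandomizedKVCache:
    """Toy prefix-block cache that evicts a uniformly random block when full,
    standing in for a provably competitive randomized eviction policy."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = {}  # block_id -> cached KV payload (placeholder)

    def hit_fraction(self, block_ids):
        """Fraction of the query's prefix blocks already resident."""
        if not block_ids:
            return 0.0
        return sum(b in self.blocks for b in block_ids) / len(block_ids)

    def insert(self, block_ids):
        for b in block_ids:
            if b in self.blocks:
                continue
            if len(self.blocks) >= self.capacity:
                victim = random.choice(list(self.blocks))  # random, not LRU
                del self.blocks[victim]
            self.blocks[b] = None  # stands in for the actual KV tensors


def route_query(caches, loads, block_ids, alpha=0.5):
    """Pick the worker maximizing alpha * cache_affinity - (1 - alpha) * load.
    `alpha` is an assumed knob trading hit rate against load balance."""
    max_load = max(loads) + 1  # avoid division by zero
    def score(i):
        return (alpha * caches[i].hit_fraction(block_ids)
                - (1 - alpha) * loads[i] / max_load)
    return max(range(len(caches)), key=score)


# Minimal usage: three workers, one incoming query sharing a system prompt.
caches = [RandomizedKVCache(capacity_blocks=64) for _ in range(3)]
loads = [2, 0, 1]
query_blocks = ["sys_prompt", "doc_17", "turn_3"]
w = route_query(caches, loads, query_blocks)
caches[w].insert(query_blocks)
loads[w] += 1
```

Under this toy scoring rule, an `alpha` near 1 greedily chases cache hits and can overload a single worker, while an `alpha` near 0 degenerates to pure load balancing; the paper's contribution is to navigate exactly this trade-off with provable guarantees rather than a fixed weight.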
Similar Papers
Fast KVzip: Efficient and Accurate LLM Inference with Gated KV Eviction
Machine Learning (CS)
Saves computer memory for faster AI.
MixKVQ: Query-Aware Mixed-Precision KV Cache Quantization for Long-Context Reasoning
Machine Learning (CS)
Makes AI think better using less computer memory.
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider
Distributed, Parallel, and Cluster Computing
Makes AI answer faster by remembering past answers.