Score: 3

Randomization Boosts KV Caching, Learning Balances Query Load: A Joint Perspective

Published: January 26, 2026 | arXiv ID: 2601.18999v1

By: Fangzhou Wu, Sandeep Silwal, Qiuyi, and more

BigTech Affiliations: Google

Potential Business Impact:

Makes AI systems answer questions much faster by reusing cached computation and balancing query load across serving workers.

Business Areas:
A/B Testing; Data and Analytics

KV caching is a fundamental technique for accelerating Large Language Model (LLM) inference by reusing key-value (KV) pairs from previous queries, but its effectiveness under limited memory is highly sensitive to the eviction policy. The default Least Recently Used (LRU) eviction algorithm struggles with dynamic online query arrivals, especially in multi-LLM serving scenarios, where balancing query load across workers and maximizing the cache hit rate of each worker are inherently conflicting objectives. We give the first unified mathematical model that captures the core trade-offs between KV cache eviction and query routing. Our analysis reveals the theoretical limitations of existing methods and leads to principled algorithms that integrate provably competitive randomized KV cache eviction with learning-based methods to adaptively route queries with evolving patterns, thus balancing query load and cache hit rate. Our theoretical results are validated by extensive experiments across 4 benchmarks and 3 prefix-sharing settings, demonstrating improvements of up to 6.92$\times$ in cache hit rate, an 11.96$\times$ reduction in latency, a 14.06$\times$ reduction in time-to-first-token (TTFT), and a 77.4% increase in throughput over state-of-the-art methods. Our code is available at https://github.com/fzwark/KVRouting.
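The abstract names two ingredients without spelling out the algorithms: a provably competitive randomized KV cache eviction policy and a learned router that trades off load balance against per-worker cache hit rate. The Python sketch below is only a toy illustration of how those two pieces could interact, not the paper's method: uniform-random eviction stands in for the competitive randomized policy, and a fixed mixing weight alpha stands in for the learned routing component. All class, function, and parameter names here are hypothetical.

import random
from collections import defaultdict

class RandomEvictionCache:
    """Toy KV cache: on overflow, evict a uniformly random entry.
    (The paper's policy is a provably competitive randomized scheme;
    uniform eviction is only a stand-in for illustration.)"""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # prefix -> cached KV blocks (opaque here)

    def lookup(self, prefix):
        return prefix in self.store  # True on a cache hit

    def insert(self, prefix, kv_blocks):
        if prefix in self.store:
            return
        if len(self.store) >= self.capacity:
            victim = random.choice(list(self.store))  # randomized eviction
            del self.store[victim]
        self.store[prefix] = kv_blocks

class AffinityLoadRouter:
    """Route each query to the worker minimizing a weighted cost that
    trades off (a) an expected cache miss and (b) current queue load.
    A learned component would adapt this trade-off online; here it is
    a fixed weight `alpha` for simplicity."""
    def __init__(self, workers, alpha=0.5):
        self.workers = workers        # list of RandomEvictionCache
        self.load = defaultdict(int)  # worker index -> queued queries
        self.alpha = alpha

    def route(self, prefix):
        def cost(i):
            miss = 0.0 if self.workers[i].lookup(prefix) else 1.0
            return self.alpha * miss + (1 - self.alpha) * self.load[i]
        i = min(range(len(self.workers)), key=cost)
        self.load[i] += 1
        return i

    def finish(self, i):
        self.load[i] -= 1

# Hypothetical usage: two workers, each with room for two cached prefixes.
workers = [RandomEvictionCache(capacity=2) for _ in range(2)]
router = AffinityLoadRouter(workers, alpha=0.7)
for prefix in ["sys-prompt-A", "sys-prompt-B", "sys-prompt-A"]:
    w = router.route(prefix)
    if not workers[w].lookup(prefix):
        workers[w].insert(prefix, kv_blocks=object())  # stand-in for real KV tensors
    router.finish(w)

The point of the sketch is the cost structure: routing purely by cache affinity concentrates shared-prefix queries on one worker and overloads it, while routing purely by load scatters them and destroys hit rate; a weighted combination, which the paper learns adaptively, mediates between the two.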

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/fzwark/KVRouting
Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)