EpiCache: Episodic KV Cache Management for Long Conversational Question Answering
By: Minsoo Kim, Arnav Kundu, Han-Byul Kim, and more
Potential Business Impact:
AI remembers long conversations without using much memory.
Modern large language models (LLMs) extend context lengths to millions of tokens, enabling AI assistants to generate coherent and personalized responses grounded in long conversational histories. This ability, however, hinges on Key-Value (KV) caching, whose memory grows linearly with dialogue length and quickly becomes the bottleneck in resource-constrained environments. An active line of research for reducing this memory bottleneck is KV cache compression, which seeks to limit cache size while preserving accuracy. Yet existing methods face two major limitations: (i) evicting the KV cache after full-context prefill causes unbounded peak memory, and (ii) query-dependent eviction narrows the cache to a single query, leading to failure cases in multi-turn conversations. We introduce EpiCache, a training-free KV cache management framework for long conversational question answering (LongConvQA) under fixed memory budgets. EpiCache bounds cache growth through block-wise prefill and preserves topic-relevant context via episodic KV compression, which clusters the conversation history into coherent episodes and applies episode-specific KV cache eviction. We further design an adaptive layer-wise budget allocation strategy that measures each layer's sensitivity to eviction and distributes the memory budget across layers accordingly. Across three LongConvQA benchmarks, EpiCache improves accuracy by up to 40% over recent baselines, sustains near-full KV accuracy under 4-6x compression, and reduces latency and memory by up to 2.4x and 3.5x, respectively, thereby enabling efficient multi-turn interaction under strict resource constraints.
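To make the abstract's three ingredients concrete, here is a minimal sketch of episode clustering, per-episode KV eviction, and layer-wise budget allocation. It is not the authors' implementation: the bag-of-words turn embeddings, plain k-means, dot-product importance scores, and softmax budget split below are placeholder assumptions, and block-wise prefill is not shown.

```python
# Hedged sketch, NOT the EpiCache implementation. Assumptions: toy bag-of-words
# embeddings, plain k-means episode clustering, dot-product attention as the
# eviction score, and a softmax over per-layer sensitivity for budget allocation.
import numpy as np

rng = np.random.default_rng(0)

def embed_turns(turns, dim=64):
    """Toy turn embeddings: hashed bag-of-words, L2-normalized (assumption)."""
    vecs = np.zeros((len(turns), dim))
    for i, turn in enumerate(turns):
        for tok in turn.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-8)

def kmeans(x, k, iters=20):
    """Plain k-means: group conversation turns into k coherent episodes."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

def allocate_layer_budgets(sensitivity, total_budget):
    """Split a total KV budget across layers in proportion to eviction sensitivity."""
    weights = np.exp(sensitivity) / np.exp(sensitivity).sum()
    return np.maximum(1, (weights * total_budget).astype(int))

def evict_for_episode(keys, episode_queries, budget):
    """Keep the `budget` cached positions scoring highest against the episode's
    representative queries (dot-product proxy for attention mass)."""
    scores = episode_queries @ keys.T          # (num_queries, num_cached)
    importance = scores.max(axis=0)            # best score per cached position
    return np.sort(np.argsort(importance)[-budget:])

# --- tiny usage example on synthetic data ---
turns = [
    "we planned the trip to kyoto in april",
    "the kyoto hotel booking is confirmed",
    "my laptop battery drains too fast lately",
    "try lowering the laptop screen brightness",
]
labels, _ = kmeans(embed_turns(turns), k=2)           # episodes: travel vs. laptop
layer_budgets = allocate_layer_budgets(
    sensitivity=np.array([0.2, 1.5, 0.7]), total_budget=96)  # 3 layers, 96 slots

num_cached, d_head = 40, 16
keys = rng.standard_normal((num_cached, d_head))      # stand-in KV keys, one layer
episode_queries = rng.standard_normal((4, d_head))    # stand-in queries, one episode
kept = evict_for_episode(keys, episode_queries, budget=layer_budgets[0])
print("episode labels:", labels, "| layer budgets:", layer_budgets, "| kept:", len(kept))
```

In a real system the keys and queries would come from the model's attention layers, and the per-layer budgets would cap peak cache size during block-wise prefill rather than being applied after the fact.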
Similar Papers
Hold Onto That Thought: Assessing KV Cache Compression On Reasoning
Computation and Language
Helps AI remember more for complex thinking.
Dialogue Without Limits: Constant-Sized KV Caches for Extended Responses in LLMs
Computation and Language
Keeps AI remembering more without using more memory.
CAKE: Cascading and Adaptive KV Cache Eviction with Layer Preferences
Computation and Language
Saves computer memory for faster AI.