SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching
By: Yuxuan Zhu, Ali Falahati, David H. Yang, and more
Potential Business Impact:
Makes AI understand long stories faster.
Large language models face significant computational and memory challenges when processing long contexts. During inference, efficient management of the key-value (KV) cache, which stores intermediate activations for autoregressive generation, is critical to reducing memory overhead and improving computational efficiency. Traditional token-level KV caching methods treat tokens independently and overlook the semantic relationships between them. Meanwhile, existing semantic-preserving KV cache management approaches often suffer from substantial memory usage and high time-to-first-token. To address these limitations, we propose SentenceKV, a novel sentence-level semantic KV caching approach designed to enhance inference efficiency while preserving semantic coherence. During prefilling, SentenceKV groups tokens by sentence-level semantic similarity, compressing each sentence's representation into a concise semantic vector stored directly on the GPU, while the individual KV pairs are offloaded to the CPU. During decoding, SentenceKV generates tokens by selectively retrieving semantically relevant sentence-level KV entries, leveraging the similarity between the prefilling-stage semantic vectors and the decoding-stage queries. This yields efficient and contextually accurate predictions while minimizing the loading of redundant or irrelevant data into GPU memory, significantly reducing memory overhead and keeping inference latency stable even for extremely long contexts. Extensive evaluations on benchmarks including PG-19, LongBench, and Needle-In-A-Haystack demonstrate that SentenceKV significantly outperforms state-of-the-art methods in both efficiency and memory usage, without compromising model accuracy.
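To make the prefill/decode flow described in the abstract concrete, here is a minimal illustrative sketch, not the authors' implementation: it groups per-token KV pairs by sentence, keeps one compact semantic vector per sentence on the GPU (here simply the mean of the sentence's keys, an assumption for illustration), offloads the full KV tensors to the CPU, and at decode time pulls back only the top-k sentences most similar to the current query. All names (SentenceKVCache, top_k, etc.) are hypothetical.

```python
# Illustrative sketch of sentence-level semantic KV caching (assumptions noted above).
import torch


class SentenceKVCache:
    def __init__(self, device="cuda" if torch.cuda.is_available() else "cpu"):
        self.device = device
        self.sentence_vecs = []   # one semantic vector per sentence (kept on GPU)
        self.cpu_kv = []          # per-sentence (keys, values) offloaded to CPU

    def prefill(self, keys, values, sentence_ids):
        """keys/values: [seq_len, head_dim]; sentence_ids: [seq_len] sentence indices."""
        for sid in sentence_ids.unique(sorted=True):
            mask = sentence_ids == sid
            k, v = keys[mask], values[mask]
            # Compress the sentence into a single semantic vector (mean key, by assumption).
            self.sentence_vecs.append(k.mean(dim=0).to(self.device))
            # Offload this sentence's full per-token KV pairs to CPU memory.
            self.cpu_kv.append((k.cpu(), v.cpu()))

    def retrieve(self, query, top_k=2):
        """Return the KV pairs of the top_k sentences most similar to the decode query."""
        sem = torch.stack(self.sentence_vecs)                       # [S, head_dim]
        scores = torch.cosine_similarity(sem, query.to(self.device).unsqueeze(0), dim=-1)
        idx = scores.topk(min(top_k, len(self.cpu_kv))).indices.tolist()
        ks = torch.cat([self.cpu_kv[i][0] for i in idx]).to(self.device)
        vs = torch.cat([self.cpu_kv[i][1] for i in idx]).to(self.device)
        return ks, vs  # attention is then computed only over these entries


if __name__ == "__main__":
    torch.manual_seed(0)
    head_dim, seq_len = 64, 12
    keys, values = torch.randn(seq_len, head_dim), torch.randn(seq_len, head_dim)
    sentence_ids = torch.tensor([0] * 4 + [1] * 4 + [2] * 4)  # three toy "sentences"

    cache = SentenceKVCache(device="cpu")   # forced to CPU so the demo runs anywhere
    cache.prefill(keys, values, sentence_ids)
    q = torch.randn(head_dim)               # decoding-stage query
    k_sel, v_sel = cache.retrieve(q, top_k=2)
    print(k_sel.shape, v_sel.shape)         # torch.Size([8, 64]) torch.Size([8, 64])
```

The key idea the sketch mirrors is that GPU memory only ever holds the small per-sentence vectors plus the retrieved subset of KV pairs, rather than the full cache; how SentenceKV actually forms its semantic vectors and scores relevance is detailed in the paper itself.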
Similar Papers
SkipKV: Selective Skipping of KV Generation and Storage for Efficient Inference with Large Reasoning Models
Artificial Intelligence
Makes AI think faster and use less memory.
H1B-KV: Hybrid One-Bit Caches for Memory-Efficient Large Language Model Inference
Computation and Language
Makes AI remember more without using much memory.
LouisKV: Efficient KV Cache Retrieval for Long Input-Output Sequences
Machine Learning (CS)
Makes AI understand long stories faster.