KVComp: A High-Performance, LLM-Aware, Lossy Compression Framework for KV Cache
By: Bo Jiang, Taolue Yang, Youyuan Liu, and more
Potential Business Impact:
Makes AI understand long stories with less memory.
Transformer-based large language models (LLMs) demonstrate impressive potential in various practical applications. However, long-context inference poses a significant challenge due to the enormous memory requirements of the key-value (KV) cache, which can grow to multiple gigabytes as sequence length and batch size increase. In this paper, we present KVComp, a generic and efficient KV cache management framework optimized for long-text generation that works synergistically with both latency-critical and throughput-critical inference systems. KVComp employs novel lossy compression techniques specifically designed for the characteristics of KV cache data, featuring a careful co-design of compression algorithms and system architecture. Our approach remains compatible with the growing nature of the KV cache while preserving high computational efficiency. Experimental results show that KVComp achieves on average 47% and up to 83% higher memory reduction rates than existing methods with little or no degradation in model accuracy. Furthermore, KVComp achieves extremely high execution throughput, effectively reducing decompression overhead and, in some cases, even accelerating the matrix-vector multiplication and outperforming cuBLAS-based attention kernels thanks to reduced data movement.
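The gigabyte-scale KV cache footprint described in the abstract can be made concrete with a quick back-of-the-envelope estimate. The sketch below is a minimal Python calculation, assuming a hypothetical 7B-class model (32 layers, 32 KV heads, head dimension 128) stored in FP16; these dimensions are illustrative assumptions, not figures taken from the paper.

```python
# Hypothetical back-of-the-envelope estimate of KV cache size.
# Model dimensions are illustrative assumptions, not from the paper.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int, bytes_per_elem: int = 2) -> int:
    """Two tensors (K and V) per layer, each of shape
    [batch_size, num_kv_heads, seq_len, head_dim]."""
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Example: a 7B-class model (32 layers, 32 KV heads, head_dim 128) in FP16.
size = kv_cache_bytes(num_layers=32, num_kv_heads=32, head_dim=128,
                      seq_len=32_768, batch_size=8)
print(f"KV cache: {size / 2**30:.1f} GiB")  # 128.0 GiB for this configuration
```

The estimate grows linearly in both sequence length and batch size, which is why long-context, high-throughput serving quickly exhausts GPU memory and motivates lossy KV cache compression.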
Similar Papers
KV Cache Compression for Inference Efficiency in LLMs: A Review
Distributed, Parallel, and Cluster Computing
Makes AI smarter and faster using less memory.
KVCompose: Efficient Structured KV Cache Compression with Composite Tokens
Machine Learning (CS)
Makes AI remember more without using more memory.
KV Pareto: Systems-Level Optimization of KV Cache and Model Compression for Long Context Inference
Machine Learning (CS)
Makes AI remember more without using much memory.