KV-CAR: KV Cache Compression using Autoencoders and KV Reuse in Large Language Models
By: Sourjya Roy, Shrihari Sridharan, Surya Selvam, and more
Potential Business Impact:
Makes AI remember more while using less computer memory.
As Large Language Models (LLMs) scale in size and context length, the memory requirements of the key-value (KV) cache have emerged as a major bottleneck during autoregressive decoding. The KV cache grows with sequence length and embedding dimension, often exceeding the memory footprint of the model itself and limiting achievable batch sizes and context windows. To address this challenge, we present KV-CAR, a unified and architecture-agnostic framework that significantly reduces KV cache storage while maintaining model fidelity. KV-CAR combines two complementary techniques. First, a lightweight autoencoder learns compact representations of key and value tensors along the embedding dimension, compressing them before they are stored in the KV cache and restoring them upon retrieval. Second, a similarity-driven reuse mechanism identifies opportunities to reuse the KV tensors of specific attention heads across adjacent layers. Together, these methods reduce the dimensional and structural redundancy in KV tensors without requiring changes to the transformer architecture. Evaluations on GPT-2 and TinyLLaMA models across the WikiText, C4, PIQA, and Winogrande datasets demonstrate that KV-CAR achieves up to 47.85% KV cache memory reduction with minimal impact on perplexity and zero-shot accuracy. System-level measurements on an NVIDIA A40 GPU show that the reduced KV footprint translates directly into longer sequence lengths and larger batch sizes during inference. These results highlight the effectiveness of KV-CAR in enabling memory-efficient LLM inference.
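The two mechanisms in the abstract are easy to picture in code. The PyTorch sketch below is a minimal illustration under assumptions of our own, not the authors' implementation: the names `KVAutoencoder` and `head_reuse_map`, the latent width, and the 0.95 similarity threshold are all hypothetical. It shows (1) a linear autoencoder that shrinks each key/value vector along the embedding dimension before it enters the cache and expands it on retrieval, and (2) a cosine-similarity check that flags heads whose key tensors are close enough to the same head in the adjacent layer to be reused rather than stored.

```python
# Minimal sketch of the two KV-CAR ideas (illustrative only; names,
# dimensions, and the threshold are assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class KVAutoencoder(nn.Module):
    """Compress K/V tensors along the embedding dimension before caching."""

    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)  # stored form: d_latent floats/token
        self.decoder = nn.Linear(d_latent, d_model)  # restored form used by attention

    def compress(self, kv: torch.Tensor) -> torch.Tensor:
        return self.encoder(kv)

    def restore(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)


def head_reuse_map(k_prev: torch.Tensor, k_curr: torch.Tensor,
                   threshold: float = 0.95) -> torch.Tensor:
    """Flag heads in the current layer whose key tensors are similar enough
    to the same head in the previous layer to be reused instead of stored.

    k_prev, k_curr: [num_heads, seq_len, head_dim]
    Returns a boolean mask of shape [num_heads]; True means "reuse".
    """
    sim = F.cosine_similarity(k_prev.flatten(1), k_curr.flatten(1), dim=-1)
    return sim >= threshold


# Usage: compress on cache write, restore on cache read.
d_model, d_latent = 768, 384                # 2x reduction along the embedding dim
ae = KVAutoencoder(d_model, d_latent)
k = torch.randn(12, 128, d_model)           # toy key tensor: [heads, seq, d_model]
cached = ae.compress(k)                     # what actually lives in the KV cache
k_hat = ae.restore(cached)                  # approximate keys fed to attention

# Heads marked True here would point at the previous layer's cache entry.
reuse = head_reuse_map(torch.randn(12, 128, 64), torch.randn(12, 128, 64))
```

With these toy numbers, caching 384 floats per token instead of 768 halves the per-token footprint, and every reused head removes its slice of the cache entirely; the paper's up-to-47.85% reduction plausibly comes from combining the two effects, though the exact accounting is the authors'.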
Similar Papers
KV Pareto: Systems-Level Optimization of KV Cache and Model Compression for Long Context Inference
Machine Learning (CS)
Makes AI remember more without using much memory.
AMS-KV: Adaptive KV Caching in Multi-Scale Visual Autoregressive Transformers
CV and Pattern Recognition
Makes AI create better pictures using less computer memory.
SmallKV: Small Model Assisted Compensation of KV Cache Compression for Efficient LLM Inference
Machine Learning (CS)
Makes AI remember more without slowing down.