MixKVQ: Query-Aware Mixed-Precision KV Cache Quantization for Long-Context Reasoning
By: Tao Zhang, Ziqian Zeng, Hao Peng, and more
Long Chain-of-Thought (CoT) reasoning has significantly advanced the capabilities of Large Language Models (LLMs), but this progress comes with substantial memory and latency overhead from the extensive Key-Value (KV) cache. Although KV cache quantization is a promising compression technique, existing low-bit methods often exhibit severe performance degradation on complex reasoning tasks. Fixed-precision quantization struggles to handle outlier channels in the key cache, while current mixed-precision strategies fail to accurately identify the components that require high-precision representation. We find that an effective low-bit KV cache quantization strategy must account for two factors: a key channel's intrinsic quantization difficulty and its relevance to the query. Based on this insight, we propose MixKVQ, a novel plug-and-play method that uses a lightweight, query-aware algorithm to identify and preserve critical key channels at higher precision, while applying per-token quantization to the value cache. Experiments on complex reasoning datasets demonstrate that our approach significantly outperforms existing low-bit methods, achieving performance comparable to a full-precision baseline at a substantially reduced memory footprint.
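The abstract describes the mechanism only at a high level. The sketch below is a minimal PyTorch illustration of the general idea, assuming a combined channel score built from dynamic range (as a proxy for quantization difficulty) and query-weighted magnitude (as a proxy for query relevance). The scoring heuristic, the top-k split, and all names (`quantize_dequantize`, `mixed_precision_kv`, `keep_ratio`) are illustrative assumptions, not the paper's actual MixKVQ algorithm.

```python
# Hedged sketch of query-aware mixed-precision KV quantization, based only on
# the abstract. The channel-scoring heuristic is an assumption, not MixKVQ.
import torch

def quantize_dequantize(x, n_bits, dim):
    """Uniform asymmetric quantize-dequantize along `dim`
    (dim=0 -> per-channel scales, dim=1 -> per-token scales)."""
    x_min = x.amin(dim=dim, keepdim=True)
    x_max = x.amax(dim=dim, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / (2 ** n_bits - 1)
    q = torch.round((x - x_min) / scale).clamp(0, 2 ** n_bits - 1)
    return q * scale + x_min

def mixed_precision_kv(keys, values, query, keep_ratio=0.1, low_bits=2):
    """keys/values: [T, d] cached K/V for one head; query: [d] current query.
    Returns reconstructed (keys, values) after mixed-precision quantization."""
    # 1) Intrinsic difficulty: channels with wide dynamic range (outliers)
    #    lose the most accuracy under uniform low-bit quantization.
    difficulty = keys.amax(dim=0) - keys.amin(dim=0)      # [d]
    # 2) Query relevance: channels the current query weights heavily
    #    contribute more to the attention logits q @ k^T.
    relevance = query.abs() * keys.abs().mean(dim=0)      # [d]
    score = difficulty * relevance                        # combined score [d]

    # Keep the highest-scoring key channels in full precision.
    k_keep = max(1, int(keep_ratio * keys.shape[1]))
    keep_idx = score.topk(k_keep).indices

    # Low-bit per-channel quantization for the remaining key channels.
    keys_q = quantize_dequantize(keys, low_bits, dim=0)
    keys_q[:, keep_idx] = keys[:, keep_idx]               # restore FP channels

    # Per-token quantization for the value cache, as stated in the abstract.
    values_q = quantize_dequantize(values, low_bits, dim=1)
    return keys_q, values_q

# Toy usage: 1024 cached tokens, head dimension 128.
T, d = 1024, 128
k, v, q = torch.randn(T, d), torch.randn(T, d), torch.randn(d)
k_q, v_q = mixed_precision_kv(k, v, q)
```

In a real cache the low-bit tensors would be stored in packed form with the kept channels in FP16; this sketch applies quantize-dequantize in place only to make the precision split visible.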
Similar Papers
KV Pareto: Systems-Level Optimization of KV Cache and Model Compression for Long Context Inference
Machine Learning (CS)
Makes AI remember more without using much memory.
Plug-and-Play 1.x-Bit KV Cache Quantization for Video Large Language Models
Computer Vision and Pattern Recognition
Makes AI watch videos using less computer memory.
XQuant: Achieving Ultra-Low Bit KV Cache Quantization with Cross-Layer Compression
Computation and Language
Makes AI remember more with less computer memory.