Kitty: Accurate and Efficient 2-bit KV Cache Quantization with Dynamic Channel-wise Precision Boost
By: Haojun Xia, Xiaoxia Wu, Jisen Li, and more
Potential Business Impact:
Makes AI models use much less memory.
The KV cache is a dominant memory bottleneck for LLM inference. While 4-bit KV quantization preserves accuracy, 2-bit often degrades it, especially on long-context reasoning. We close this gap with Kitty, an algorithm-system co-design for mixed-precision KV caching. On the algorithm side, extensive experiments show that Dynamic Channel-wise Precision Boost -- which ranks Key-cache channels by sensitivity and keeps only a small fraction at higher precision -- maintains a near-zero accuracy drop while approaching 2-bit memory. The main challenge is handling dynamic 4-bit channel boosts while keeping the page layout coalesced and the dequantization uniform, with no scattered reads or hard-coded masks. Kitty addresses these issues by decomposing each mixed-precision Key page into two tensors with a unified 2-bit representation. Building on this, Kitty provides a page-centric KV layout, Triton-compatible page dequantization kernels, and a lightweight runtime pipeline that preserves coalescing and avoids divergence. Across seven tasks and two model families (Qwen3, LLaMA3), Kitty cuts KV memory by nearly 8x with negligible accuracy loss, enabling up to 8x larger batches and 2.1x-4.1x higher throughput under the same memory budget. We release the full implementation of Kitty at https://github.com/Summer-Summer/Kitty.
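To make the two core ideas concrete, here is a minimal PyTorch sketch of (a) ranking Key-cache channels by a sensitivity proxy and (b) splitting a boosted 4-bit code into two uniform 2-bit planes so a single 2-bit dequantization path can serve every page. The sensitivity proxy (`rank_channels`), the group size, and the nibble-plane decomposition are illustrative assumptions, not Kitty's exact implementation; consult the released code for the real layout and kernels.

```python
import torch

def quantize(x, bits, group_size=64):
    """Asymmetric per-group quantization (grouping scheme is an assumption)."""
    levels = (1 << bits) - 1                      # 2-bit -> 3, 4-bit -> 15
    xg = x.reshape(-1, group_size)
    mn = xg.min(dim=1, keepdim=True).values
    mx = xg.max(dim=1, keepdim=True).values
    scale = (mx - mn).clamp(min=1e-8) / levels
    q = ((xg - mn) / scale).round().clamp(0, levels).to(torch.uint8)
    return q, scale, mn

def rank_channels(key_cache, keep_frac=0.05):
    """Hypothetical sensitivity proxy: mean absolute value per channel."""
    sensitivity = key_cache.abs().mean(dim=0)     # [num_channels]
    k = max(1, int(keep_frac * sensitivity.numel()))
    return sensitivity.topk(k).indices            # channels boosted to 4-bit

def split_4bit(q4):
    """Decompose 4-bit codes (0..15) into two uniform 2-bit tensors."""
    return (q4 >> 2) & 0x3, q4 & 0x3              # (high, low) planes

# Toy usage: a tokens-by-channels Key cache with ~5% of channels boosted.
keys = torch.randn(256, 128)
boosted = rank_channels(keys)
q4, scale, zero = quantize(keys[:, boosted], bits=4)
hi, lo = split_4bit(q4)
assert torch.equal((hi << 2) | lo, q4)            # the split is lossless
```

Because the high and low planes are both plain 2-bit tensors, the dequantization kernel never branches on which channels were boosted, which is one plausible way to keep reads coalesced and avoid divergence as the abstract describes.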
Similar Papers
KV Pareto: Systems-Level Optimization of KV Cache and Model Compression for Long Context Inference
Machine Learning (CS)
Makes AI remember more without using much memory.
XQuant: Achieving Ultra-Low Bit KV Cache Quantization with Cross-Layer Compression
Computation and Language
Makes AI remember more with less computer memory.
Accurate KV Cache Quantization with Outlier Tokens Tracing
Computation and Language
Makes AI use less memory and run faster.