KVSwap: Disk-aware KV Cache Offloading for Long-Context On-device Inference
By: Huawei Zhang, Chunwei Xia, Zheng Wang
Potential Business Impact:
Lets phones run capable AI on long documents without an internet connection.
Language models (LMs) underpin emerging mobile and embedded AI applications such as meeting and video summarization and document analysis, which often require processing multiple long-context inputs. Running an LM locally on-device improves privacy, enables offline use, and reduces cost, but long-context inference quickly hits a memory capacity wall as the key-value (KV) cache grows linearly with context length and batch size. We present KVSwap, a software framework that breaks this memory wall by offloading the KV cache to non-volatile secondary storage (disk). KVSwap leverages the observation that only a small, dynamically changing subset of KV entries is critical for generation. It stores the full cache on disk, uses compact in-memory metadata to predict which entries to preload, overlaps computation with hardware-aware disk access, and orchestrates read patterns to match storage device characteristics. Our evaluation shows that, across representative LMs and storage types, KVSwap delivers higher throughput under tight memory budgets while maintaining generation quality compared with existing KV cache offloading schemes.
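The abstract outlines the mechanism at a high level: the full KV cache lives on disk, a compact in-memory summary predicts which entries the next decoding step will need, and disk reads are reordered to suit the storage device while computation proceeds in parallel. The minimal Python sketch below illustrates that general idea only; the block granularity, the mean-key summaries used as metadata, the `OffloadedKVCache` and `decode_step` names, and the background-thread prefetcher are our assumptions for illustration, not the KVSwap implementation.

```python
import threading
import numpy as np

BLOCK = 64        # tokens per KV block (assumed offloading granularity)
HEAD_DIM = 128    # per-head key/value dimension (assumed)

class OffloadedKVCache:
    """Full K/V blocks live in a memory-mapped file on disk; only a small
    per-block summary (mean key vector) stays in RAM as prediction metadata."""

    def __init__(self, path, n_blocks):
        shape = (n_blocks, 2, BLOCK, HEAD_DIM)  # [block, K-or-V, token, dim]
        self.store = np.memmap(path, dtype=np.float16, mode="w+", shape=shape)
        self.meta = np.zeros((n_blocks, HEAD_DIM), dtype=np.float32)
        self.n_blocks = n_blocks

    def write_block(self, idx, k, v):
        # Persist the block to disk and keep only its compact summary in RAM.
        self.store[idx, 0] = k
        self.store[idx, 1] = v
        self.meta[idx] = k.astype(np.float32).mean(axis=0)

    def predict_blocks(self, query, top_k):
        # Score blocks by similarity between the current query and summaries,
        # standing in for whatever predictor the real system uses.
        scores = self.meta @ query.astype(np.float32)
        return np.argsort(scores)[-top_k:][::-1]

    def prefetch(self, block_ids, out):
        # Sort reads by block index so the device sees a mostly sequential
        # access pattern, then load on a background thread.
        def _read():
            for b in sorted(block_ids):
                out[b] = np.array(self.store[b])
        t = threading.Thread(target=_read)
        t.start()
        return t

def decode_step(cache, query, compute_fn, top_k=8):
    """Overlap disk I/O with computation: start prefetching the predicted
    blocks, do other work, then attend over the loaded blocks."""
    wanted = cache.predict_blocks(query, top_k)
    loaded = {}
    reader = cache.prefetch(wanted, loaded)
    partial = compute_fn()            # e.g., layers that do not need the cache
    reader.join()
    kv_blocks = [loaded[b] for b in sorted(loaded)]
    return partial, kv_blocks
```

In a real system the predictor, block size, and read ordering would be tuned to the model's attention behavior and to the device's random- versus sequential-read characteristics, which is the hardware awareness the abstract emphasizes.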
Similar Papers
Breaking the Boundaries of Long-Context LLM Inference: Adaptive KV Management on a Single Commodity GPU
Operating Systems
Lets a single desktop GPU handle very long AI inputs.
KV Pareto: Systems-Level Optimization of KV Cache and Model Compression for Long Context Inference
Machine Learning (CS)
Makes AI remember more without using much memory.
KV Cache Compression for Inference Efficiency in LLMs: A Review
Distributed, Parallel, and Cluster Computing
Surveys ways to make AI faster while using less memory.