VLCache: Computing 2% Vision Tokens and Reusing 98% for Vision-Language Inference
By: Shengling Qin, Hao Yu, Chenxin Wu, and more
Potential Business Impact:
Saves computing power by remembering past work.
This paper presents VLCache, a cache reuse framework that exploits both the Key-Value (KV) cache and the encoder cache from prior multimodal inputs to eliminate costly recomputation when the same multimodal inputs recur. Unlike previous heuristic approaches, we formally identify the cumulative reuse error effect and demonstrate how to minimize the non-prefix cache reuse error effectively. We further analyze the varying importance of model layers and propose a dynamic, layer-aware recomputation strategy to balance accuracy and efficiency. Experimental results show that VLCache achieves accuracy on par with full recomputation while computing only 2-5% of the tokens, yielding 1.2x-16x speedups in time-to-first-token (TTFT). The proposed VLCache pipeline has been integrated into SGLang, enabling significantly faster inference in practical deployments.
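To make the reuse pattern concrete, below is a minimal Python sketch of the general idea described in the abstract: on a cache hit for a previously seen image, reuse the stored per-layer KV tensors and recompute only a small subset of vision tokens on a few designated layers. This is not the VLCache or SGLang implementation; all names (VisionKVCache, important_layers, reuse_ratio) and the dummy layer function are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): a content-addressed
# cache of per-layer (K, V) tensors plus a prefill routine that, on a hit,
# refreshes only ~2% of tokens on a few "important" layers.
import hashlib
import numpy as np


class VisionKVCache:
    """Caches per-layer (K, V) tensors keyed by a hash of the image bytes."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def key(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def get(self, image_bytes):
        return self._store.get(self.key(image_bytes))

    def put(self, image_bytes, per_layer_kv):
        self._store[self.key(image_bytes)] = per_layer_kv


def fake_layer_forward(tokens, layer_idx, rng):
    """Stand-in for a transformer layer: returns dummy (K, V) for its tokens."""
    k = rng.standard_normal((tokens.shape[0], 64))
    v = rng.standard_normal((tokens.shape[0], 64))
    return k, v


def prefill_with_reuse(image_bytes, vision_tokens, cache,
                       important_layers=(0, 1), reuse_ratio=0.98):
    """On a cache hit, recompute a small token subset on selected layers
    and reuse cached (K, V) everywhere else."""
    rng = np.random.default_rng(0)
    n_layers, n_tokens = 24, vision_tokens.shape[0]
    cached = cache.get(image_bytes)

    if cached is None:
        # Cold path: full prefill over all vision tokens, then fill the cache.
        per_layer_kv = [fake_layer_forward(vision_tokens, l, rng)
                        for l in range(n_layers)]
        cache.put(image_bytes, per_layer_kv)
        return per_layer_kv, n_layers * n_tokens  # token-layer steps computed

    # Warm path: pick the ~2% of tokens to refresh (here: evenly spaced).
    n_recompute = max(1, int(round((1.0 - reuse_ratio) * n_tokens)))
    refresh_idx = np.linspace(0, n_tokens - 1, n_recompute).astype(int)

    reused_kv, computed = [], 0
    for l, (k, v) in enumerate(cached):
        k, v = k.copy(), v.copy()
        if l in important_layers:
            # Recompute only the selected tokens on layers deemed important.
            new_k, new_v = fake_layer_forward(vision_tokens[refresh_idx], l, rng)
            k[refresh_idx], v[refresh_idx] = new_k, new_v
            computed += len(refresh_idx)
        reused_kv.append((k, v))
    return reused_kv, computed


if __name__ == "__main__":
    cache = VisionKVCache()
    image = b"same image bytes served twice"
    tokens = np.zeros((1024, 16))  # 1024 dummy vision tokens
    _, cold = prefill_with_reuse(image, tokens, cache)
    _, warm = prefill_with_reuse(image, tokens, cache)
    print(f"cold prefill: {cold} token-layer steps; warm reuse: {warm}")
```

In this toy setup the warm request touches roughly 2% of the vision tokens on two layers while reusing the rest, which is the kind of ratio behind the reported TTFT speedups; the paper's actual token selection and layer-importance criteria are more principled than the evenly spaced choice used here.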
Similar Papers
LightVLM: Accelerating Large Multimodal Models with Pyramid Token Merging and KV Cache Compression
CV and Pattern Recognition
Makes AI understand pictures much faster.
CacheFlow: Compressive Streaming Memory for Efficient Long-Form Video Understanding
CV and Pattern Recognition
Lets computers watch long videos and answer questions.
SpeCache: Speculative Key-Value Caching for Efficient Generation of LLMs
Computation and Language
Lets AI remember longer stories without forgetting.