VLCache: Computing 2% Vision Tokens and Reusing 98% for Vision-Language Inference

Published: December 15, 2025 | arXiv ID: 2512.12977v1

By: Shengling Qin, Hao Yu, Chenxin Wu, and more

Potential Business Impact:

Cuts inference compute and latency by reusing cached results from previously seen multimodal inputs instead of recomputing them.

Business Areas:
Image Recognition, Data and Analytics, Software

This paper presents VLCache, a cache reuse framework that exploits both the Key-Value (KV) cache and the encoder cache from prior multimodal inputs to eliminate costly recomputation when the same multimodal inputs recur. Unlike previous heuristic approaches, the authors formally identify the cumulative reuse error effect and show how to effectively minimize the non-prefix cache reuse error. They further analyze the varying importance of model layers and propose a dynamic, layer-aware recomputation strategy to balance accuracy and efficiency. Experimental results show that VLCache achieves accuracy on par with full recomputation while computing only 2-5% of the tokens, yielding 1.2x-16x time-to-first-token (TTFT) speedups. The VLCache pipeline has been integrated into SGLang, enabling significantly faster inference in practical deployments.
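To make the idea concrete, below is a minimal Python sketch (not the authors' code) of the selective-recompute pattern the abstract describes: per-layer KV entries for recurring vision tokens are reused from cache, and only a small, layer-dependent fraction is recomputed. The drift-based token scoring, the linear per-layer budget schedule, and all function names (`layer_budgets`, `select_tokens_to_recompute`, `reuse_with_partial_recompute`) are illustrative assumptions, not details taken from the paper or from SGLang.

```python
# Hedged sketch of reusing cached per-layer KV for recurring vision tokens while
# recomputing only a small, layer-aware fraction of them. All heuristics here
# (drift scoring, budget schedule) are assumptions for illustration only.
import numpy as np

def layer_budgets(num_layers: int, base_frac: float = 0.02, max_frac: float = 0.05):
    """Assumed layer-aware schedule: recompute a slightly larger fraction of
    tokens in early layers and taper toward the base fraction in later ones."""
    return np.linspace(max_frac, base_frac, num_layers)

def select_tokens_to_recompute(new_hidden, cached_hidden, frac):
    """Pick the tokens whose current hidden states drift most from the cached
    ones; all remaining tokens reuse the cached KV entries unchanged."""
    drift = np.linalg.norm(new_hidden - cached_hidden, axis=-1)  # (T,)
    k = max(1, int(round(frac * drift.shape[0])))
    return np.argsort(drift)[-k:]  # indices of the top-k drifting tokens

def reuse_with_partial_recompute(new_hidden, cached_hidden, cached_kv, frac, recompute_fn):
    """Merge reused and freshly recomputed KV entries for a single layer."""
    idx = select_tokens_to_recompute(new_hidden, cached_hidden, frac)
    k_cache, v_cache = cached_kv
    k_new, v_new = recompute_fn(new_hidden[idx])  # only ~frac of the tokens
    k_out, v_out = k_cache.copy(), v_cache.copy()
    k_out[idx], v_out[idx] = k_new, v_new
    return (k_out, v_out), idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D, L = 1024, 64, 4  # vision tokens, hidden dim, layers (toy sizes)
    cached_hidden = rng.normal(size=(T, D))
    new_hidden = cached_hidden + 0.01 * rng.normal(size=(T, D))  # recurring input
    cached_kv = (rng.normal(size=(T, D)), rng.normal(size=(T, D)))
    fake_proj = lambda h: (h @ rng.normal(size=(D, D)), h @ rng.normal(size=(D, D)))
    for layer, frac in enumerate(layer_budgets(L)):
        _, idx = reuse_with_partial_recompute(new_hidden, cached_hidden,
                                              cached_kv, frac, fake_proj)
        print(f"layer {layer}: recomputed {len(idx)}/{T} tokens ({frac:.1%})")
```

Run as-is, the toy example recomputes roughly 2-5% of the 1,024 vision tokens per layer, which is the regime the paper reports; the actual selection and recomputation criteria in VLCache are defined in the paper itself.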

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition