Spec-LLaVA: Accelerating Vision-Language Models with Dynamic Tree-Based Speculative Decoding
By: Mingxiao Huo, Jiayi Zhang, Hewei Wang, and more
Potential Business Impact:
Makes AI understand pictures and words much faster.
Vision-Language Models (VLMs) enable powerful multimodal reasoning but suffer from slow autoregressive inference, limiting their deployment in real-time applications. We introduce Spec-LLaVA, a system that applies speculative decoding to accelerate VLMs without sacrificing output quality. Spec-LLaVA pairs a lightweight draft VLM with a large target model: the draft speculates future tokens, which the target verifies in parallel, allowing multiple tokens to be generated per step. To maximize efficiency, we design a dynamic tree-based verification algorithm that adaptively expands and prunes speculative branches using draft model confidence. On MS COCO out-of-domain images, Spec-LLaVA achieves up to 3.28$\times$ faster decoding on LLaVA-1.5 (7B, 13B) with no loss in generation quality. This work presents a lossless acceleration framework for VLMs using dynamic tree-structured speculative decoding, opening a path toward practical real-time multimodal assistants. Importantly, the lightweight draft model design makes the framework amenable to resource-constrained or on-device deployment settings.
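To make the mechanism concrete, below is a minimal, self-contained Python sketch of dynamic tree-based speculative decoding in the spirit of the abstract. The draft and target models are toy stand-ins (pseudo-random distributions over a tiny vocabulary), and the names and parameters (`BRANCH_THRESHOLD`, `TOP_K`, `MAX_DEPTH`, greedy verification) are illustrative assumptions, not Spec-LLaVA's actual algorithm or API. A real system would also verify the whole tree in a single batched target forward pass rather than walking it node by node.

```python
# Toy sketch of confidence-driven tree speculation + verification.
# All models, thresholds, and names here are hypothetical placeholders.
import math
import random

VOCAB = list(range(16))        # toy vocabulary
MAX_DEPTH = 4                  # longest speculative branch (assumed)
BRANCH_THRESHOLD = 0.25        # expand a child only above this draft prob (assumed)
TOP_K = 3                      # candidate children per node (assumed)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def draft_probs(context):
    """Toy draft model: deterministic pseudo-random distribution per context."""
    rng = random.Random(hash(tuple(context)) & 0xFFFFFFFF)
    return softmax([rng.uniform(0, 4) for _ in VOCAB])

def target_probs(context):
    """Toy target model: correlated with the draft but not identical."""
    rng = random.Random((hash(tuple(context)) ^ 0x9E3779B9) & 0xFFFFFFFF)
    d = draft_probs(context)
    return softmax([math.log(p + 1e-9) + rng.uniform(0, 1.0) for p in d])

class Node:
    def __init__(self, token, q, context):
        self.token, self.q, self.context = token, q, context
        self.children = []

def grow_tree(node, depth):
    """Expand high-confidence draft continuations; prune low-confidence ones."""
    if depth == MAX_DEPTH:
        return
    probs = draft_probs(node.context)
    top = sorted(VOCAB, key=lambda t: -probs[t])[:TOP_K]
    for t in top:
        if probs[t] >= BRANCH_THRESHOLD:  # dynamic pruning on draft confidence
            child = Node(t, probs[t], node.context + [t])
            node.children.append(child)
            grow_tree(child, depth + 1)

def verify(root):
    """Greedy verification: follow the branch whose token matches the target's
    argmax; when no child matches, emit the target token and stop."""
    accepted, node = [], root
    while True:
        p = target_probs(node.context)
        best = max(VOCAB, key=lambda t: p[t])
        match = next((c for c in node.children if c.token == best), None)
        if match is None:
            accepted.append(best)   # the target's own token closes the step
            return accepted
        accepted.append(match.token)
        node = match

if __name__ == "__main__":
    prompt = [1, 2, 3]
    root = Node(None, 1.0, prompt)
    grow_tree(root, 0)
    tokens = verify(root)
    print(f"accepted {len(tokens)} tokens in one verification step: {tokens}")
```

Because every emitted token is either confirmed or produced by the target model itself, the output matches what the target alone would generate under this greedy rule, which is the sense in which speculative decoding is lossless; the speedup comes from accepting several draft tokens per target pass.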
Similar Papers
SpecVLM: Fast Speculative Decoding in Vision-Language Models
CV and Pattern Recognition
Makes AI understand pictures and words faster.
ViSpec: Accelerating Vision-Language Models with Vision-Aware Speculative Decoding
CV and Pattern Recognition
Makes AI understand pictures and words faster.