Score: 1

FastVLM: Self-Speculative Decoding for Fast Vision-Language Model Inference

Published: October 26, 2025 | arXiv ID: 2510.22641v1

By: Divya Jyoti Bajpai, Manjesh Kumar Hanawal

Potential Business Impact:

Makes AI systems that understand pictures and answer questions about them respond faster, at lower compute cost.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-language Models (VLMs) have made significant strides in visual understanding and query response generation, but often suffer high computational cost and inference latency due to autoregressive decoding. In this work, we introduce an imitation-learning-based Self-Speculative Decoding (SSD) framework, named FastVLM, to address these limitations. Our approach employs a lightweight draft model that generates tokens autoregressively, while the full model verifies these tokens non-autoregressively. Accepted tokens proceed seamlessly, while rejected tokens are corrected by the full model and used to guide the draft model's refinement. Through an imitation network, FastVLM strengthens the draft model by integrating insights from the deeper layers of the full model. It also preserves the full model's performance while training the draft model, striking a balance between efficiency and accuracy. Our method speeds up inference by 1.55-1.85x compared to decoding from the final layer, with minimal loss in performance.
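To make the draft-and-verify flow described in the abstract concrete, below is a minimal sketch of a generic self-speculative decoding loop. It is illustrative only: `full_model` and `draft_model` are assumed to expose a Hugging-Face-style causal-LM interface (`model(ids).logits`), acceptance uses an exact greedy match rather than the paper's criterion, and FastVLM's early-exit draft layers and imitation network are not modeled here.

```python
import torch

@torch.no_grad()
def self_speculative_decode(full_model, draft_model, input_ids,
                            k=4, max_new_tokens=64):
    """Generic draft-and-verify loop with greedy acceptance (a sketch,
    not FastVLM's exact procedure).

    The draft model proposes k tokens autoregressively; the full model
    scores the whole proposed block in a single parallel pass, accepts
    the longest agreeing prefix, and contributes one corrected (or
    bonus) token, so every round advances by at least one token.
    """
    ids = input_ids  # shape (1, L)
    while ids.shape[-1] - input_ids.shape[-1] < max_new_tokens:
        # 1) Draft phase: cheap autoregressive proposal of k tokens.
        draft = ids
        for _ in range(k):
            logits = draft_model(draft).logits[:, -1, :]
            draft = torch.cat([draft, logits.argmax(-1, keepdim=True)], dim=-1)
        proposed = draft[:, ids.shape[-1]:]  # (1, k) drafted tokens
        # 2) Verify phase: one non-autoregressive full-model pass.
        #    Position t predicts the token at t + 1, so positions
        #    L-1 .. L+k-1 yield k+1 next-token predictions.
        preds = full_model(draft).logits[:, ids.shape[-1] - 1:, :].argmax(-1)
        # 3) Accept the longest prefix where draft and full model agree.
        agree = (preds[:, :k] == proposed).squeeze(0).long()
        n_ok = int(agree.cumprod(0).sum())
        # 4) Accepted tokens proceed; the first rejected position is
        #    replaced by the full model's own prediction (when all k
        #    are accepted, this slice is the full model's bonus token).
        ids = torch.cat([ids, proposed[:, :n_ok], preds[:, n_ok:n_ok + 1]],
                        dim=-1)
    return ids
```

The speedup comes from the verification step: k sequential full-model calls are replaced by a single parallel pass plus a cheap draft loop, which is where the reported 1.55-1.85x gain over final-layer decoding originates.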

Country of Origin
🇮🇳 India

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)