Score: 2

LiteVLM: A Low-Latency Vision-Language Model Inference Pipeline for Resource-Constrained Environments

Published: June 9, 2025 | arXiv ID: 2506.07416v1

By: Jin Huang, Yuchao Jin, Le An, and more

BigTech Affiliations: NVIDIA

Potential Business Impact:

Makes robots and cars understand the world faster.

Business Areas:
Image Recognition, Data and Analytics, Software

This paper introduces an efficient Vision-Language Model (VLM) pipeline optimized for deployment on embedded devices, such as those used in robotics and autonomous driving. The pipeline significantly reduces computational overhead by jointly leveraging patch selection to filter out irrelevant camera views, a token selection module to shorten the input sequence fed to the LLM, and speculative decoding to accelerate token generation. Evaluated on the NVIDIA DRIVE Thor platform for an autonomous driving application, the pipeline achieves a $2.5\times$ end-to-end latency reduction without compromising task accuracy. The speed-up increases further to $3.2\times$ when FP8 post-training quantization is applied. These results demonstrate that the pipeline is a viable solution for enabling real-time VLM deployment in resource-constrained environments.
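To make the three stages concrete, below is a minimal, hypothetical Python sketch of how patch selection, token selection, and (greedy) speculative decoding could fit together. The function names, relevance scores, keep-counts, and toy "models" are illustrative assumptions, not the paper's actual implementation; in particular, real speculative decoding verifies all drafted tokens in a single batched forward pass of the target model.

```python
# Hypothetical sketch of a LiteVLM-style pipeline; all names and
# thresholds below are illustrative assumptions, not the authors' code.
import torch


def select_patches(camera_views: torch.Tensor, relevance: torch.Tensor, keep: int) -> torch.Tensor:
    """Patch selection: keep only the `keep` most relevant camera views/patches."""
    idx = relevance.topk(keep).indices
    return camera_views[idx]


def select_tokens(vision_tokens: torch.Tensor, scores: torch.Tensor, keep: int) -> torch.Tensor:
    """Token selection: prune vision tokens to shorten the LLM input sequence."""
    idx = scores.topk(keep).indices.sort().values  # preserve original token order
    return vision_tokens[idx]


def speculative_decode(draft_next, target_next, prompt, max_new=16, k=4):
    """Greedy speculative decoding: a cheap draft model proposes k tokens,
    the target model accepts the longest matching prefix and corrects the rest."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1) draft k candidate tokens autoregressively with the cheap model
        draft = []
        for _ in range(k):
            draft.append(draft_next(seq + draft))
        # 2) verify with the target model; accept until the first mismatch
        accepted = []
        for i in range(k):
            t = target_next(seq + accepted)
            accepted.append(t)           # target token; equals draft[i] when accepted
            if t != draft[i]:
                break                    # rejected: target's own token replaces it
        seq.extend(accepted)
    return seq


# Toy usage with random data (shapes and counts are illustrative):
views = torch.randn(6, 3, 224, 224)            # 6 camera views
kept_views = select_patches(views, torch.rand(6), keep=2)

tokens = torch.randn(576, 1024)                # vision tokens from the encoder
kept_tokens = select_tokens(tokens, torch.rand(576), keep=144)

next_tok = lambda seq: sum(seq) % 7            # stand-in "model" over integer tokens
out = speculative_decode(next_tok, next_tok, prompt=[1, 2, 3], max_new=8, k=4)
print(len(kept_views), kept_tokens.shape, out)
```

In this sketch, the speed-up comes from the same three levers the abstract describes: fewer views enter the vision encoder, fewer tokens enter the LLM, and multiple output tokens are drafted cheaply and verified per target-model step.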

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)