A Survey on Efficient Vision-Language Models

Published: April 13, 2025 | arXiv ID: 2504.09724v3

By: Gaurav Shinde, Anuradha Ravi, Emon Dey, and more

Potential Business Impact:

Enables capable vision-language AI to run on small, resource-constrained devices.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-language models (VLMs) integrate visual and textual information, enabling a wide range of applications such as image captioning and visual question answering, making them crucial for modern AI systems. However, their high computational demands pose challenges for real-time applications. This has led to a growing focus on developing efficient vision-language models. In this survey, we review key techniques for optimizing VLMs on edge and resource-constrained devices. We also explore compact VLM architectures and frameworks, and provide detailed insights into the performance-memory trade-offs of efficient VLMs. Furthermore, we establish a GitHub repository at https://github.com/MPSCUMBC/Efficient-Vision-Language-Models-A-Survey to compile all surveyed papers, which we will actively update. Our objective is to foster deeper research in this area.
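To make the abstract's notion of "optimizing VLMs for edge devices" concrete, here is a minimal illustrative sketch (not taken from the paper) of one representative technique: post-training dynamic quantization of a model's linear layers with PyTorch. The `TinyTextHead` module is a hypothetical stand-in for the text-side layers of a VLM; real models are far larger, and actual efficient-VLM pipelines combine many such techniques.

```python
# Illustrative sketch (assumption, not from the survey): dynamic int8 quantization
# of Linear layers, a common way to shrink memory and speed up CPU inference
# on resource-constrained hardware.
import torch
import torch.nn as nn


class TinyTextHead(nn.Module):
    """Hypothetical stand-in for a VLM's text decoder head."""

    def __init__(self, vocab_size: int = 1000, hidden: int = 256):
        super().__init__()
        self.proj = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(torch.relu(self.proj(x)))


model = TinyTextHead()

# Convert Linear layers to int8 weights; activations remain floating point.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)  # torch.Size([1, 1000])
```

In practice, the surveyed works pair quantization with other strategies (pruning, knowledge distillation, compact architectures) and evaluate the resulting accuracy-memory trade-offs, which is the focus of the paper's comparison tables.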

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links
https://github.com/MPSCUMBC/Efficient-Vision-Language-Models-A-Survey

Page Count
35 pages

Category
Computer Science:
Computer Vision and Pattern Recognition