A Survey on Efficient Vision-Language Models
By: Gaurav Shinde, Anuradha Ravi, Emon Dey, and more
Potential Business Impact:
Makes smart AI work on small, slow devices.
Vision-language models (VLMs) integrate visual and textual information, enabling a wide range of applications such as image captioning and visual question answering, and making them crucial for modern AI systems. However, their high computational demands pose challenges for real-time applications. This has led to a growing focus on developing efficient vision-language models. In this survey, we review key techniques for optimizing VLMs for edge and resource-constrained devices. We also explore compact VLM architectures and frameworks, and provide detailed insights into the performance-memory trade-offs of efficient VLMs. Furthermore, we establish a GitHub repository at https://github.com/MPSCUMBC/Efficient-Vision-Language-Models-A-Survey to compile all surveyed papers, which we will actively update. Our objective is to foster deeper research in this area.
Similar Papers
Vision-Language Models for Edge Networks: A Comprehensive Survey
CV and Pattern Recognition
Makes smart AI work on small, cheap devices.
A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges
CV and Pattern Recognition
Lets computers understand pictures and words together.
Small Vision-Language Models: A Survey on Compact Architectures and Techniques
CV and Pattern Recognition
Makes AI understand pictures and words with less power.