Small Vision-Language Models: A Survey on Compact Architectures and Techniques

Published: March 9, 2025 | arXiv ID: 2503.10665v1

By: Nitesh Patnaik, Navdeep Nayak, Himani Bansal Agrawal, and more

Potential Business Impact:

Enables AI systems to understand images and text using less computing power.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The emergence of small vision-language models (sVLMs) marks a critical advancement in multimodal AI, enabling efficient processing of visual and textual data in resource-constrained environments. This survey offers a comprehensive exploration of sVLM development, presenting a taxonomy of architectures (transformer-based, Mamba-based, and hybrid) that highlights innovations in compact design and computational efficiency. Techniques such as knowledge distillation, lightweight attention mechanisms, and modality pre-fusion are discussed as enablers of high performance with reduced resource requirements. Through an in-depth analysis of models like TinyGPT-V, MiniGPT-4, and VL-Mamba, we identify trade-offs between accuracy, efficiency, and scalability. Persistent challenges, including data biases and generalization to complex tasks, are critically examined, with proposed pathways for addressing them. By consolidating advancements in sVLMs, this work underscores their transformative potential for accessible AI, setting a foundation for future research into efficient multimodal systems.
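Of the compression techniques the abstract names, knowledge distillation is the most widely used: a small student model is trained to match the temperature-softened output distribution of a larger teacher. Below is a minimal, dependency-free sketch of the standard distillation loss (KL divergence between softened teacher and student distributions, scaled by T², as in Hinton et al.'s formulation); the logits and temperature values are illustrative, not taken from the survey.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling; higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions, scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl

# Illustrative 3-class logits (hypothetical values).
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(student, teacher, temperature=2.0)
```

In practice this term is combined with the ordinary cross-entropy against hard labels, and the same idea extends to vision-language models by distilling over the joint image-text output space.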

Page Count
39 pages

Category
Computer Science:
Computer Vision and Pattern Recognition