SmolVLM: Redefining small and efficient multimodal models
By: Andrés Marafioti, Orr Zohar, Miquel Farré, and more
Potential Business Impact:
Makes smart AI work on phones, not just big computers.
Large Vision-Language Models (VLMs) deliver exceptional performance but require significant computational resources, limiting their deployment on mobile and edge devices. Smaller VLMs typically mirror design choices of larger models, such as extensive image tokenization, leading to inefficient GPU memory usage and constrained practicality for on-device applications. We introduce SmolVLM, a series of compact multimodal models specifically engineered for resource-efficient inference. We systematically explore architectural configurations, tokenization strategies, and data curation optimized for low computational overhead. Through this, we identify key design choices that yield substantial performance gains on image and video tasks with minimal memory footprints. Our smallest model, SmolVLM-256M, uses less than 1GB of GPU memory during inference and outperforms the 300-times larger Idefics-80B model, despite an 18-month development gap. Our largest model, at 2.2B parameters, rivals state-of-the-art VLMs that consume twice the GPU memory. SmolVLM models extend beyond static images, demonstrating robust video comprehension capabilities. Our results emphasize that strategic architectural optimizations, aggressive yet efficient tokenization, and carefully curated training data significantly enhance multimodal performance, facilitating practical, energy-efficient deployments at significantly smaller scales.
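For readers who want to probe the small-footprint inference claim themselves, below is a minimal sketch of running a SmolVLM checkpoint with the Hugging Face transformers library. The checkpoint name, processor usage, and half-precision setting are assumptions based on common Hub conventions, not details taken from this abstract; consult the official SmolVLM model card for the canonical snippet.

# Minimal image-captioning sketch for a SmolVLM-style checkpoint (assumed setup).
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

MODEL_ID = "HuggingFaceTB/SmolVLM-256M-Instruct"  # assumed Hub checkpoint name

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half-precision weights keep the memory footprint small
).to("cuda" if torch.cuda.is_available() else "cpu")

image = Image.open("example.jpg")  # any local test image

# Chat-style prompt: one user turn containing an image placeholder and a question.
messages = [
    {"role": "user",
     "content": [{"type": "image"},
                 {"type": "text", "text": "Describe this image briefly."}]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])

On a GPU, the half-precision 256M-parameter variant should fit comfortably within the sub-1GB budget the abstract describes; the 2.2B variant follows the same pattern with a larger checkpoint.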
Similar Papers
Small Vision-Language Models: A Survey on Compact Architectures and Techniques
CV and Pattern Recognition
Makes AI understand pictures and words with less power.
A Survey on Efficient Vision-Language Models
CV and Pattern Recognition
Makes smart AI work on small, slow devices.
SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
Machine Learning (CS)
Makes robots understand and do tasks from words.