SAIL-VL2 Technical Report
By: Weijie Yin, Yongjie Ye, Fangxun Shu, and more
Potential Business Impact:
Lets computers understand pictures and videos better.
We introduce SAIL-VL2, an open-suite vision-language foundation model (VLM) for comprehensive multimodal understanding and reasoning. As the successor to SAIL-VL, SAIL-VL2 achieves state-of-the-art performance at the 2B and 8B parameter scales across diverse image and video benchmarks, demonstrating strong capabilities from fine-grained perception to complex reasoning. Its effectiveness is driven by three core innovations. First, a large-scale data curation pipeline with scoring and filtering strategies enhances both quality and distribution across captioning, OCR, QA, and video data, improving training efficiency. Second, a progressive training framework begins with a powerful pre-trained vision encoder (SAIL-ViT), advances through multimodal pre-training, and culminates in a thinking-fusion SFT-RL hybrid paradigm that systematically strengthens model capabilities. Third, architectural advances extend beyond dense LLMs to efficient sparse Mixture-of-Experts (MoE) designs. With these contributions, SAIL-VL2 demonstrates competitive performance across 106 datasets and achieves state-of-the-art results on challenging reasoning benchmarks such as MMMU and MathVista. Furthermore, on the OpenCompass leaderboard, SAIL-VL2-2B ranks first among officially released open-source models under the 4B parameter scale, while serving as an efficient and extensible foundation for the open-source multimodal community.
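The report itself is the authority on the pipeline's details, but the score-and-filter idea behind the data curation step can be illustrated with a small sketch. Everything below is a hypothetical stand-in, not the paper's implementation: the Sample schema, the quality_score heuristic, and the curate function are assumptions for illustration, and a real pipeline would use learned quality and alignment scorers rather than a text-length heuristic.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One multimodal training sample (hypothetical schema)."""
    image_path: str
    text: str
    task: str  # e.g. "caption", "ocr", "qa", "video"

def quality_score(sample: Sample) -> float:
    """Placeholder scorer. A real pipeline would use learned models
    (e.g. caption-image alignment or OCR legibility) per sample."""
    # Trivial heuristic stand-in: prefer non-empty, reasonably long text.
    return min(len(sample.text) / 100.0, 1.0)

def curate(samples: list[Sample], threshold: float = 0.5,
           per_task_cap: int = 1_000_000) -> list[Sample]:
    """Score every sample, drop low-quality ones, and cap each task type
    so the final mixture stays balanced across caption/OCR/QA/video data."""
    kept: list[Sample] = []
    counts: dict[str, int] = {}
    # Consider the highest-scoring samples first.
    for s in sorted(samples, key=quality_score, reverse=True):
        if quality_score(s) < threshold:
            break  # all remaining samples score lower
        if counts.get(s.task, 0) < per_task_cap:
            kept.append(s)
            counts[s.task] = counts.get(s.task, 0) + 1
    return kept
```

The per-task cap stands in for the distribution-balancing aspect the abstract alludes to: filtering on quality alone can skew the mixture toward whichever task type is easiest to score highly.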
Similar Papers
The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer
CV and Pattern Recognition
Lets computers see pictures and understand words together.
MiMo-VL Technical Report
Computation and Language
Helps computers understand pictures and words better.