MiMo-VL Technical Report
By: Xiaomi LLM-Core Team (Zihao Yue and more)
Potential Business Impact:
Helps computers understand pictures and words better.
We open-source MiMo-VL-7B-SFT and MiMo-VL-7B-RL, two powerful vision-language models delivering state-of-the-art performance in both general visual understanding and multimodal reasoning. MiMo-VL-7B-RL outperforms Qwen2.5-VL-7B on 35 of 40 evaluated tasks and scores 59.4 on OlympiadBench, surpassing models with up to 78B parameters. For GUI grounding applications, it sets a new standard with 56.1 on OSWorld-G, even outperforming specialized models such as UI-TARS. Our training combines four-stage pre-training (2.4 trillion tokens) with Mixed On-policy Reinforcement Learning (MORL), which integrates diverse reward signals. We identify the importance of incorporating high-quality reasoning data with long Chain-of-Thought into the pre-training stages, and the benefits of mixed RL despite the challenges of optimizing multiple domains simultaneously. We also contribute a comprehensive evaluation suite covering 50+ tasks to promote reproducibility and advance the field. The model checkpoints and full evaluation suite are available at https://github.com/XiaomiMiMo/MiMo-VL.
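The abstract does not spell out how MORL combines its diverse reward signals, so the following is a minimal sketch of one plausible mixing scheme, assuming a weighted sum of per-domain rewards feeding the on-policy update. All function names, weights, and the toy verifiers below are hypothetical illustrations, not the report's implementation.

```python
"""Sketch of mixed-reward scoring for Mixed On-policy RL (MORL).

Illustrative assumption only: the report states that MORL integrates
diverse reward signals (e.g., rule-based verifiers and preference-style
rewards), but the exact combination scheme is not given here.
"""
from typing import Callable, Dict

# Hypothetical per-domain reward function: maps (prompt, response) -> float.
RewardFn = Callable[[str, str], float]


def mixed_reward(
    prompt: str,
    response: str,
    reward_fns: Dict[str, RewardFn],
    weights: Dict[str, float],
) -> float:
    """Combine heterogeneous reward signals into one scalar per rollout."""
    total = 0.0
    for name, fn in reward_fns.items():
        total += weights.get(name, 1.0) * fn(prompt, response)
    return total


def exact_match_reward(prompt: str, response: str) -> float:
    # Toy rule-based verifiable reward: 1.0 if the expected answer appears.
    return 1.0 if "42" in response else 0.0


def preference_reward(prompt: str, response: str) -> float:
    # Toy stand-in for a learned reward-model score; a real system would
    # call a trained preference model here.
    return min(len(response) / 100.0, 1.0)


if __name__ == "__main__":
    fns = {"verifier": exact_match_reward, "preference": preference_reward}
    w = {"verifier": 1.0, "preference": 0.5}
    print(mixed_reward("What is 6*7?", "The answer is 42.", fns, w))
```

A weighted sum is only one design choice; the report's note about the difficulty of simultaneous multi-domain optimization suggests that how these signals are balanced across domains matters in practice.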
Similar Papers
MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining
Computation and Language
Helps computers solve math and code problems.
Kimi-VL Technical Report
CV and Pattern Recognition
Helps computers understand images, videos, and text better.
SAIL-VL2 Technical Report
CV and Pattern Recognition
Lets computers understand pictures and videos better.