Score: 1

EvoLMM: Self-Evolving Large Multimodal Models with Continuous Rewards

Published: November 20, 2025 | arXiv ID: 2511.16672v2

By: Omkar Thawakar, Shravan Venkatraman, Ritesh Thawkar, and more

Potential Business Impact:

Teaches AI to learn by asking itself questions.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Recent advances in large multimodal models (LMMs) have enabled impressive reasoning and perception abilities, yet most existing training pipelines still depend on human-curated data or externally verified reward models, limiting their autonomy and scalability. In this work, we strive to improve LMM reasoning capabilities in a purely unsupervised fashion (without any annotated data or reward distillation). To this end, we propose a self-evolving framework, named EvoLMM, that instantiates two cooperative agents from a single backbone model: a Proposer, which generates diverse, image-grounded questions, and a Solver, which solves them through internal consistency, where learning proceeds through a continuous self-rewarding process. This dynamic feedback encourages both the generation of informative queries and the refinement of structured reasoning without relying on ground truth or human judgments. When using the popular Qwen2.5-VL as the base model, our EvoLMM yields consistent gains of up to ~3% on multimodal math-reasoning benchmarks, including ChartQA, MathVista, and MathVision, using only raw training images. We hope our simple yet effective approach will serve as a solid baseline, easing future research on self-improving LMMs in a fully unsupervised fashion. Our code and models are available at https://github.com/mbzuai-oryx/EvoLMM.
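The core idea above is a reward computed from the model's own agreement rather than from labels. As a minimal sketch of that kind of continuous, annotation-free signal, the snippet below scores a batch of sampled Solver answers by the fraction that agree with the modal answer; the function name and this majority-vote formulation are illustrative assumptions, not the paper's exact reward.

```python
from collections import Counter

def self_consistency_reward(answers):
    """Continuous reward from internal consistency: the fraction of
    sampled Solver answers that match the most common answer.
    No ground truth is used -- agreement itself is the signal."""
    if not answers:
        return 0.0
    counts = Counter(answers)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(answers)

# Five hypothetical Solver samples for one Proposer question:
print(self_consistency_reward(["42", "42", "41", "42", "40"]))  # 0.6
```

Because the reward is a fraction rather than a binary correct/incorrect flag, it stays informative even when the Solver has not yet converged on a single answer, which is what makes the feedback loop usable without any external verifier.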

Country of Origin
🇦🇪 United Arab Emirates

Repos / Data Links
https://github.com/mbzuai-oryx/EvoLMM

Page Count
12 pages

Category
Computer Science:
Computer Vision and Pattern Recognition