EvoLMM: Self-Evolving Large Multimodal Models with Continuous Rewards
By: Omkar Thawakar, Shravan Venkatraman, Ritesh Thawkar, and more
Potential Business Impact:
Teaches AI to learn by asking itself questions.
Recent advances in large multimodal models (LMMs) have enabled impressive reasoning and perception abilities, yet most existing training pipelines still depend on human-curated data or externally verified reward models, limiting their autonomy and scalability. In this work, we strive to improve LMM reasoning capabilities in a purely unsupervised fashion, without any annotated data or reward distillation. To this end, we propose a self-evolving framework, named EvoLMM, that instantiates two cooperative agents from a single backbone model: a Proposer, which generates diverse, image-grounded questions, and a Solver, which attempts to answer them, with learning driven by a continuous self-rewarding process based on the Solver's internal consistency. This dynamic feedback encourages both the generation of informative queries and the refinement of structured reasoning without relying on ground truth or human judgments. Using the popular Qwen2.5-VL as the base model, EvoLMM yields consistent gains of up to ~3% on multimodal math-reasoning benchmarks, including ChartQA, MathVista, and MathVision, using only raw training images. We hope our simple yet effective approach will serve as a solid baseline that eases future research on self-improving LMMs in a fully unsupervised fashion. Our code and models are available at https://github.com/mbzuai-oryx/EvoLMM.
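To make the training loop concrete, below is a minimal, runnable Python sketch of the Proposer-Solver self-rewarding cycle the abstract describes. The stub model, the function names (propose_question, consistency_reward), and the majority-vote consistency reward are illustrative assumptions, not the paper's actual API or exact reward formulation; the point is that the reward is continuous and requires no ground-truth labels or external verifier.

```python
# Illustrative sketch of a proposer-solver self-rewarding loop.
# All names and the stub model below are assumptions for demonstration;
# see https://github.com/mbzuai-oryx/EvoLMM for the authors' implementation.
import random
from collections import Counter

def sample(prompt: str, temperature: float = 1.0) -> str:
    """Stub standing in for a backbone LMM call (e.g., Qwen2.5-VL)."""
    # A real implementation would decode from the model; here we fake answers.
    return random.choice(["A", "A", "B", "C"])

def propose_question(image_id: str) -> str:
    """Proposer role: generate an image-grounded question (assumed prompt)."""
    return f"[image:{image_id}] Propose a challenging question about this image."

def consistency_reward(question: str, n_samples: int = 8) -> float:
    """Solver role: continuous reward = fraction of sampled answers that
    agree with the majority answer (self-consistency, no ground truth)."""
    answers = [sample(question) for _ in range(n_samples)]
    majority_answer, count = Counter(answers).most_common(1)[0]
    return count / n_samples  # in (0, 1]: a smooth signal, no external verifier

if __name__ == "__main__":
    for image_id in ["img_001", "img_002"]:
        question = propose_question(image_id)
        reward = consistency_reward(question)
        # A real pipeline would feed this reward back to update the shared
        # backbone for both roles (e.g., via policy-gradient RL).
        print(f"{image_id}: reward={reward:.2f}")
```

Note the design choice this sketch highlights: because the reward is a fraction of agreeing samples rather than a binary correct/incorrect check, it provides a dense learning signal even when no answer can be externally verified.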
Similar Papers
Multi-Agent Evolve: LLM Self-Improve through Co-evolution
Artificial Intelligence
Helps computers learn to solve problems better alone.