From Sight to Insight: Improving Visual Reasoning Capabilities of Multimodal Models via Reinforcement Learning
By: Omar Sharif, Eftekhar Hossain, Patrick Ng
Potential Business Impact:
Helps AI see and think better to solve puzzles.
Reinforcement learning (RL) has emerged as a promising approach for eliciting reasoning chains before generating final answers. However, multimodal large language models (MLLMs) often generate reasoning that fails to integrate visual information. This limits their ability to solve problems that demand accurate visual perception, such as visual puzzles. We show that visual perception is the key bottleneck in such tasks: converting images into textual descriptions significantly improves performance, yielding gains of 26.7% for Claude 3.5 and 23.6% for Claude 3.7. To address this, we investigate reward-driven RL as a mechanism to unlock long visual reasoning in open-source MLLMs without requiring costly supervision. We design and evaluate six reward functions targeting different reasoning aspects, including image understanding, thinking steps, and answer accuracy. Using group relative policy optimization (GRPO), our approach explicitly incentivizes longer, structured reasoning and mitigates bypassing of visual information. Experiments on Qwen-2.5-VL-7B yield a 5.56% improvement over the base model, with consistent gains across both in-domain and out-of-domain settings.
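The abstract's core mechanism is GRPO, which scores a group of sampled reasoning chains with reward functions and normalizes each reward against the group's statistics. A minimal sketch of that group-relative advantage step is below; the reward values and the `grpo_advantages` helper are illustrative assumptions, not the paper's implementation.

```python
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages: center each sampled response's reward
    on the group mean and scale by the group standard deviation.
    This is the normalization at the heart of GRPO; responses scoring
    above the group average get positive advantage, below get negative."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Illustrative rewards for four sampled reasoning chains, e.g. a
# combination of image-understanding, thinking-step, and accuracy rewards.
rewards = [0.2, 0.5, 0.9, 0.4]
advantages = grpo_advantages(rewards)
```

In GRPO these advantages weight the policy-gradient update for each response, so chains that describe the image and reason before answering (and thus earn higher reward) are reinforced relative to chains that bypass the visual content.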
Similar Papers
More Than the Final Answer: Improving Visual Extraction and Logical Consistency in Vision-Language Models
CV and Pattern Recognition
Makes AI better at seeing and thinking.
Perception-R1: Advancing Multimodal Reasoning Capabilities of MLLMs via Visual Perception Reward
Machine Learning (CS)
Teaches computers to see and think better.
Learning Only with Images: Visual Reinforcement Learning with Reasoning, Rendering, and Visual Feedback
CV and Pattern Recognition
Computers learn to understand pictures without words.