R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization
By: Jingyi Zhang, Jiaxing Huang, Huanjin Yao, and more
Potential Business Impact:
Teaches AI to think through problems, not just copy.
Recent studies generally enhance MLLMs' reasoning capabilities via supervised fine-tuning on high-quality chain-of-thought reasoning data, which often leads models to merely imitate successful reasoning paths without understanding why incorrect reasoning paths fail. In this work, we aim to enhance MLLMs' reasoning ability beyond passive imitation of positive reasoning paths. To this end, we design Step-wise Group Relative Policy Optimization (StepGRPO), a new online reinforcement learning framework that enables MLLMs to self-improve their reasoning ability via simple, effective, and dense step-wise rewards. Specifically, StepGRPO introduces two novel rule-based reasoning rewards: the Step-wise Reasoning Accuracy Reward (StepRAR) and the Step-wise Reasoning Validity Reward (StepRVR). StepRAR rewards reasoning paths that contain the necessary intermediate reasoning steps via a soft key-step matching technique, while StepRVR rewards reasoning paths that follow a well-structured and logically consistent reasoning process through a reasoning-completeness and logic-evaluation strategy. With the proposed StepGRPO, we introduce R1-VL, a series of MLLMs with outstanding capabilities in step-by-step reasoning. Extensive experiments over 8 benchmarks demonstrate the superiority of our method.
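To make the two rewards more concrete, below is a minimal Python sketch of how step-wise rewards and a group-relative advantage might be computed. It represents a reasoning path as a list of step strings; the fuzzy-matching threshold, the specific validity checks, and all helper names here are illustrative assumptions, not the paper's exact rules.

```python
# Minimal sketch of StepGRPO-style rewards (illustrative, not the paper's
# exact implementation). A reasoning path is modeled as a list of step
# strings plus a final answer string.

from difflib import SequenceMatcher


def step_rar(path_steps, key_steps, threshold=0.8):
    """Step-wise Reasoning Accuracy Reward (StepRAR): reward a path for
    covering the necessary intermediate key steps, using a soft (fuzzy)
    string match rather than exact equality. The 0.8 threshold is an
    assumption for illustration."""
    matched = 0
    for key in key_steps:
        # Soft key-step matching: a key step counts as covered if any
        # generated step is sufficiently similar to it.
        if any(SequenceMatcher(None, key, step).ratio() >= threshold
               for step in path_steps):
            matched += 1
    return matched / max(len(key_steps), 1)


def step_rvr(path_steps, final_answer):
    """Step-wise Reasoning Validity Reward (StepRVR): reward paths that are
    complete (non-trivial reasoning followed by an answer) and logically
    ordered (the conclusion comes last). Both checks are simplified
    stand-ins for the paper's completeness and logic evaluation."""
    has_reasoning = len(path_steps) >= 2            # more than a bare answer
    ends_with_answer = bool(final_answer.strip())   # the path must conclude
    return 1.0 if (has_reasoning and ends_with_answer) else 0.0


def group_relative_advantage(rewards):
    """GRPO-style advantage: normalize each sampled path's total reward
    against the mean and standard deviation of its sampling group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero for uniform groups
    return [(r - mean) / std for r in rewards]
```

In StepGRPO proper, the per-path rewards would be combined and the group-relative advantages used to weight the policy-gradient update, as in standard GRPO; the sketch above only illustrates the dense step-wise reward signal that distinguishes it from an outcome-only reward.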
Similar Papers
Stepwise Guided Policy Optimization: Coloring your Incorrect Reasoning in GRPO
Machine Learning (CS)
Helps AI learn from mistakes, not just successes.
OThink-MR1: Stimulating multimodal generalized reasoning capabilities via dynamic reinforcement learning
Machine Learning (CS)
Teaches AI to understand and reason better.
DeepVideo-R1: Video Reinforcement Fine-Tuning via Difficulty-aware Regressive GRPO
CV and Pattern Recognition
Helps AI understand videos better by learning smarter.