Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme
By: Yan Ma, Steffi Chern, Xuyang Shen, and more
Potential Business Impact:
Makes AI better at understanding pictures and words.
Reinforcement learning (RL) has recently shown strong potential in improving the reasoning capabilities of large language models and is now being actively extended to vision-language models (VLMs). However, existing RL applications in VLMs often rely on heavily engineered frameworks that hinder reproducibility and accessibility, and they lack standardized evaluation protocols, making it difficult to compare results or interpret training dynamics. This work introduces a transparent, from-scratch framework for RL in VLMs, offering a minimal yet functional four-step pipeline validated across multiple models and datasets. In addition, a standardized evaluation scheme is proposed to assess training dynamics and reflective behaviors. Extensive experiments on visual reasoning tasks uncover key empirical findings: response length is sensitive to random seeds, reflection correlates with output length, and RL consistently outperforms supervised fine-tuning (SFT) in generalization, even when SFT is trained on high-quality data. Together with the proposed framework, these findings aim to establish a reproducible baseline and support broader engagement in RL-based VLM research.
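The abstract describes the framework only as a "minimal yet functional four-step pipeline" without naming the steps. As a rough illustration, one common four-step decomposition for RL post-training (rollout, verifiable reward, group-normalized advantage, policy update) is sketched below; the step names, the exact-match reward, and the GRPO-style group baseline are all assumptions for illustration, not the authors' design.

# Hypothetical sketch of a four-step RL loop for VLM post-training.
# The decomposition (rollout -> reward -> advantage -> update) is an
# assumption; the paper's abstract does not name its four steps.

import random
from dataclasses import dataclass

@dataclass
class Sample:
    image_id: str   # stand-in for pixel data
    question: str
    answer: str     # verifiable ground truth

def rollout(policy, batch, k=4):
    """Step 1: sample k candidate responses per prompt from the policy."""
    return [[policy(s.image_id, s.question) for _ in range(k)] for s in batch]

def reward(batch, groups):
    """Step 2: score responses with a verifiable reward (exact match here)."""
    return [[1.0 if r.strip() == s.answer else 0.0 for r in g]
            for s, g in zip(batch, groups)]

def advantage(rewards):
    """Step 3: subtract the within-group mean reward (GRPO-style baseline,
    assumed; the paper may use a different estimator)."""
    return [[r - sum(g) / len(g) for r in g] for g in rewards]

def update(policy_params, advantages, lr=1e-2):
    """Step 4: policy-gradient update (placeholder arithmetic on a scalar)."""
    total = sum(a for g in advantages for a in g)
    count = sum(len(g) for g in advantages)
    return policy_params + lr * total / count

# Toy run: a random policy choosing between two candidate answers.
policy_params = 0.0
toy_policy = lambda img, q: random.choice(["cat", "dog"])
batch = [Sample("img_001", "What animal is shown?", "cat")]
for step in range(3):
    groups = rollout(toy_policy, batch)
    advs = advantage(reward(batch, groups))
    policy_params = update(policy_params, advs)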
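The evaluation scheme is said to track training dynamics and reflective behaviors, and the findings link reflection to output length. A minimal sketch of that kind of metric follows: the fraction of responses containing reflective phrases alongside mean response length. The marker list is an assumption; the paper's actual reflection criteria are not given in the abstract.

# Hypothetical reflection metric; REFLECTIVE_MARKERS is an assumed
# keyword list, not the paper's published criterion.

REFLECTIVE_MARKERS = ("wait", "re-check", "let me verify", "on second thought")

def reflection_stats(responses):
    """Return (reflection_ratio, mean_length_in_tokens) for a batch."""
    n = max(len(responses), 1)
    reflective = sum(
        any(m in r.lower() for m in REFLECTIVE_MARKERS) for r in responses
    )
    mean_len = sum(len(r.split()) for r in responses) / n
    return reflective / n, mean_len

responses = [
    "The answer is 12.",
    "Wait, let me verify: the grid has 3 rows of 4, so 12.",
]
ratio, mean_len = reflection_stats(responses)
print(f"reflection ratio={ratio:.2f}, mean length={mean_len:.1f} tokens")

Tracking these two numbers per checkpoint would let one test the abstract's claim that reflection correlates with output length over the course of training.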
Similar Papers
VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model
CV and Pattern Recognition
Makes computers understand pictures better using rewards.
More Than the Final Answer: Improving Visual Extraction and Logical Consistency in Vision-Language Models
CV and Pattern Recognition
Makes AI better at seeing and thinking.
SynthRL: Scaling Visual Reasoning with Verifiable Data Synthesis
Machine Learning (CS)
Teaches computers to solve harder math problems.