SSL4RL: Revisiting Self-supervised Learning as Intrinsic Reward for Visual-Language Reasoning
By: Xiaojun Guo, Runyu Zhou, Yifei Wang, and more
Potential Business Impact:
Teaches AI models to actually use what they see instead of guessing from text.
Vision-language models (VLMs) have shown remarkable abilities by integrating large language models with visual inputs. However, they often fail to make adequate use of visual evidence, either relying on linguistic priors in vision-centric tasks or resorting to textual shortcuts during reasoning. Although reinforcement learning (RL) can align models with desired behaviors, its application to VLMs has been hindered by the lack of scalable and reliable reward mechanisms. To overcome this challenge, we propose SSL4RL, a novel framework that leverages self-supervised learning (SSL) tasks as a source of verifiable rewards for RL-based fine-tuning. Our approach reformulates SSL objectives (such as predicting image rotation or reconstructing masked patches) into dense, automatic reward signals, eliminating the need for human preference data or unreliable AI evaluators. Experiments show that SSL4RL substantially improves performance on both vision-centric and vision-language reasoning benchmarks. Furthermore, through systematic ablations, we identify key factors (such as task difficulty, model scale, and semantic alignment with the target domain) that influence the effectiveness of SSL4RL tasks, offering new design principles for future work. We also demonstrate the framework's generality by applying it to graph learning, where it yields significant gains. SSL4RL establishes a versatile and effective paradigm for aligning multimodal models using verifiable, self-supervised objectives.
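To make the core idea concrete, here is a minimal sketch of how an SSL pretext task such as rotation prediction can serve as a verifiable reward: because the true rotation angle is sampled by the trainer, the VLM's answer can be checked automatically, with no human labels or AI judges. The function names `rotation_reward` and `vlm_predict_rotation` are hypothetical illustrations, not the paper's API, and the simple 0/1 reward is an assumption; the actual reward shaping in SSL4RL may differ.

```python
import random
from PIL import Image

ROTATIONS = (0, 90, 180, 270)  # pretext-task label space: multiples of 90 degrees

def rotation_reward(image: Image.Image, vlm_predict_rotation, rng=random) -> float:
    """Verifiable reward from the rotation-prediction SSL task.

    `vlm_predict_rotation(rotated_image, choices)` is a hypothetical hook that
    asks the VLM which rotation was applied. The ground-truth angle is known
    by construction, so correctness is checkable without human preference data.
    """
    angle = rng.choice(ROTATIONS)
    rotated = image.rotate(angle)  # PIL rotates counterclockwise, in degrees
    predicted = vlm_predict_rotation(rotated, ROTATIONS)
    return 1.0 if predicted == angle else 0.0

# Usage with a stub that guesses at random (stand-in for a real VLM call):
if __name__ == "__main__":
    img = Image.new("RGB", (224, 224), color="gray")
    guesser = lambda im, choices: random.choice(choices)
    print(rotation_reward(img, guesser))  # 1.0 on a correct guess, else 0.0
```

In an RL fine-tuning loop, a reward like this would score each sampled model response, replacing a learned or human-provided reward model with a check the trainer can verify itself.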
Similar Papers
VideoSSR: Video Self-Supervised Reinforcement Learning
CV and Pattern Recognition
Teaches computers to understand videos better automatically.
Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning
CV and Pattern Recognition
Teaches computers to understand 3D space from pictures.
Masked-and-Reordered Self-Supervision for Reinforcement Learning from Verifiable Rewards
Computation and Language
Teaches computers to solve math problems better.