Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning
By: Yuhong Liu, Beichen Zhang, Yuhang Zang, and more
Potential Business Impact:
Teaches computers to understand 3D space from pictures.
Spatial understanding remains a weakness of Large Vision-Language Models (LVLMs). Existing supervised fine-tuning (SFT) and recent reinforcement learning with verifiable rewards (RLVR) pipelines depend on costly supervision, specialized tools, or constrained environments that limit scale. We introduce Spatial-SSRL, a self-supervised RL paradigm that derives verifiable signals directly from ordinary RGB or RGB-D images. Spatial-SSRL automatically formulates five pretext tasks that capture 2D and 3D spatial structure: shuffled patch reordering, flipped patch recognition, cropped patch inpainting, regional depth ordering, and relative 3D position prediction. These tasks provide ground-truth answers that are easy to verify and require no human or LVLM annotation. Training on our tasks substantially improves spatial reasoning while preserving general visual capabilities. On seven spatial understanding benchmarks in both image and video settings, Spatial-SSRL delivers average accuracy gains of 4.63% (3B) and 3.89% (7B) over the Qwen2.5-VL baselines. Our results show that simple, intrinsic supervision enables RLVR at scale and provides a practical route to stronger spatial intelligence in LVLMs.
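To make the pretext-task idea concrete, below is a minimal, hypothetical sketch of one such task (shuffled patch reordering) and its binary verifiable reward. The function names, the 2x2 grid size, and the exact-match reward are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of one Spatial-SSRL-style pretext task: shuffled patch
# reordering with a verifiable reward. Names and defaults are illustrative.
import random
from PIL import Image

def make_patch_reordering_task(image: Image.Image, grid: int = 2):
    """Split an image into grid x grid patches, shuffle them, and return the
    shuffled image plus the ground-truth permutation (the verifiable answer)."""
    w, h = image.size
    pw, ph = w // grid, h // grid
    patches = [image.crop((c * pw, r * ph, (c + 1) * pw, (r + 1) * ph))
               for r in range(grid) for c in range(grid)]
    order = list(range(len(patches)))
    random.shuffle(order)
    shuffled = Image.new(image.mode, (pw * grid, ph * grid))
    for slot, src in enumerate(order):
        r, c = divmod(slot, grid)
        shuffled.paste(patches[src], (c * pw, r * ph))
    # The model sees `shuffled` and must predict, for each slot, the index of
    # the original patch placed there; `order` is the ground truth.
    return shuffled, order

def verify_reward(predicted: list[int], answer: list[int]) -> float:
    """Binary verifiable reward: 1.0 only if the predicted ordering is exact."""
    return 1.0 if predicted == answer else 0.0
```

Because the answer is generated automatically when the task is constructed, the reward can be checked exactly without human or LVLM annotation, which is what allows this style of RLVR to scale to ordinary RGB or RGB-D images.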
Similar Papers
SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards
CV and Pattern Recognition
Helps computers understand 3D space like people.
Spatial Preference Rewarding for MLLMs Spatial Understanding
CV and Pattern Recognition
Teaches computers to describe pictures better.
VideoSSR: Video Self-Supervised Reinforcement Learning
CV and Pattern Recognition
Teaches computers to understand videos better automatically.