Video Spatial Reasoning with Object-Centric 3D Rollout
By: Haoran Tang, Meng Cao, Ruyang Liu, and more
Potential Business Impact:
Teaches computers to understand 3D object locations in videos.
Recent advances in Multi-modal Large Language Models (MLLMs) have showcased remarkable capabilities in vision-language understanding. However, enabling robust video spatial reasoning (the ability to comprehend object locations, orientations, and inter-object relationships in dynamic 3D scenes) remains a key unsolved challenge. Existing approaches primarily rely on spatially grounded supervised fine-tuning or reinforcement learning, yet we observe that such models often exhibit query-locked reasoning, focusing narrowly on objects explicitly mentioned in the prompt while ignoring critical contextual cues. To address this limitation, we propose Object-Centric 3D Rollout (OCR), a novel strategy that introduces structured perturbations to the 3D geometry of selected objects during training. By degrading object-specific visual cues and projecting the altered geometry into 2D space, OCR compels the model to reason holistically across the entire scene. We further design a rollout-based training pipeline that jointly leverages vanilla and region-noisy videos to optimize spatial reasoning trajectories. Experiments demonstrate state-of-the-art performance: our 3B-parameter model achieves 47.5% accuracy on VSI-Bench, outperforming several 7B baselines. Ablations confirm OCR's superiority over prior rollout strategies (e.g., T-GRPO, NoisyRollout).
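To make the core idea more concrete, the snippet below is a minimal conceptual sketch of object-centric 3D perturbation followed by 2D projection: it jitters the 3D points of one selected object and reprojects the scene through a pinhole camera, leaving the rest of the geometry untouched. The function names, Gaussian noise model, and camera intrinsics are illustrative assumptions, not the paper's actual implementation.

```python
# Conceptual sketch (assumptions, not the authors' code): perturb the 3D points
# of a single selected object, then project the altered geometry into 2D.
import numpy as np

def perturb_object_points(points, object_ids, target_id, sigma=0.05, rng=None):
    """Add Gaussian jitter only to the 3D points labeled with `target_id`."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = points.copy()
    mask = object_ids == target_id
    noisy[mask] += rng.normal(scale=sigma, size=noisy[mask].shape)
    return noisy

def project_to_2d(points, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of camera-frame 3D points (x, y, z) to pixel coordinates."""
    z = np.clip(points[:, 2], 1e-6, None)  # guard against division by zero
    u = fx * points[:, 0] / z + cx
    v = fy * points[:, 1] / z + cy
    return np.stack([u, v], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform([-1, -1, 2], [1, 1, 5], size=(1000, 3))  # toy scene points
    ids = rng.integers(0, 5, size=1000)                        # toy object labels
    noisy_pts = perturb_object_points(pts, ids, target_id=2, rng=rng)
    clean_2d, noisy_2d = project_to_2d(pts), project_to_2d(noisy_pts)
    # Only pixels belonging to object 2 shift, degrading its visual cues while the
    # surrounding scene stays intact: a "region-noisy" view paired with the vanilla one.
    print(np.abs(clean_2d - noisy_2d).max(axis=0))
```

In the paper's training pipeline, such region-noisy videos are rolled out alongside the unperturbed ones so the model cannot rely solely on the queried object's appearance.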
Similar Papers
Video2Layout: Recall and Reconstruct Metric-Grounded Cognitive Map for Spatial Reasoning
CV and Pattern Recognition
Helps computers understand 3D space from videos.
Vision-Language Memory for Spatial Reasoning
CV and Pattern Recognition
Helps robots understand 3D space better from videos.
Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models
CV and Pattern Recognition
Teaches computers to understand 3D objects from different views.