Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models
By: Runsen Xu, Weiyao Wang, Hao Tang, and more
Potential Business Impact:
Helps robots understand moving scenes over time.
Multi-modal large language models (MLLMs) have rapidly advanced in visual tasks, yet their spatial understanding remains limited to single images, leaving them ill-suited for robotics and other real-world applications that require multi-frame reasoning. In this paper, we propose a framework to equip MLLMs with robust multi-frame spatial understanding by integrating depth perception, visual correspondence, and dynamic perception. Central to our approach is the MultiSPA dataset, a novel, large-scale collection of more than 27 million samples spanning diverse 3D and 4D scenes. Alongside MultiSPA, we introduce a comprehensive benchmark that tests a wide spectrum of spatial tasks under uniform metrics. Our resulting model, Multi-SpatialMLLM, achieves significant gains over baselines and proprietary systems, demonstrating scalable, generalizable multi-frame reasoning. We further observe multi-task benefits and early indications of emergent capabilities in challenging scenarios, and showcase how our model can serve as a multi-frame reward annotator for robotics.
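To make the multi-frame setup concrete, the sketch below shows one way a multi-frame spatial question-answer sample could be represented and rendered into a prompt. The `SpatialSample` dataclass, its field names, and the task labels are illustrative assumptions, not the paper's actual MultiSPA schema or the model's real interface.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sample format for multi-frame spatial QA; the real MultiSPA
# schema is not specified here, so these names are illustrative only.
@dataclass
class SpatialSample:
    frame_paths: List[str]   # ordered RGB frames from one scene
    task: str                # e.g. "depth", "correspondence", "dynamics"
    question: str            # natural-language spatial query over the frames
    answer: str              # ground-truth answer used for training/evaluation

def build_prompt(sample: SpatialSample) -> str:
    """Render a sample into a text prompt with per-frame image placeholders."""
    frame_tags = "\n".join(
        f"Frame {i + 1}: <image>" for i in range(len(sample.frame_paths))
    )
    return f"{frame_tags}\nTask: {sample.task}\nQuestion: {sample.question}"

if __name__ == "__main__":
    sample = SpatialSample(
        frame_paths=["scene_0001/t0.png", "scene_0001/t1.png"],
        task="dynamics",
        question="Between the two frames, does the red box move toward or away from the camera?",
        answer="toward the camera",
    )
    print(build_prompt(sample))
```

A format along these lines would let a single model handle depth, correspondence, and dynamic-perception questions over the same stack of frames, which matches the multi-task framing described in the abstract.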
Similar Papers
Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence
CV and Pattern Recognition
Helps computers understand 3D space from flat pictures.
MM-Spatial: Exploring 3D Spatial Understanding in Multimodal LLMs
CV and Pattern Recognition
Teaches computers to understand 3D spaces like rooms.
Spatial 3D-LLM: Exploring Spatial Awareness in 3D Vision-Language Models
CV and Pattern Recognition
Helps computers understand 3D spaces better.