MM-Spatial: Exploring 3D Spatial Understanding in Multimodal LLMs
By: Erik Daxberger, Nina Wenzel, David Griffiths, and more
Potential Business Impact:
Teaches computers to understand 3D spaces like rooms.
Multimodal large language models (MLLMs) excel at 2D visual understanding but remain limited in their ability to reason about 3D space. In this work, we leverage large-scale, high-quality 3D scene data with open-set annotations to introduce 1) a novel supervised fine-tuning dataset and 2) a new evaluation benchmark, both focused on indoor scenes. Our Cubify Anything VQA (CA-VQA) data covers diverse spatial tasks, including spatial relationship prediction, metric size and distance estimation, and 3D grounding. We show that CA-VQA enables us to train MM-Spatial, a strong generalist MLLM that also achieves state-of-the-art performance on 3D spatial understanding benchmarks, including our own. We show how incorporating metric depth and multi-view inputs (provided in CA-VQA) can further improve 3D understanding, and demonstrate that training on our data alone allows our model to achieve depth perception capabilities comparable to dedicated monocular depth estimation models.
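To make the task mix concrete, below is a minimal sketch in Python of what a CA-VQA-style supervised fine-tuning sample covering these tasks might look like. The class and field names (SpatialVQASample, box_3d, etc.) and file paths are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch of a spatial-VQA fine-tuning sample; the schema below
# is an assumption for illustration, not the real CA-VQA format.
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class SpatialVQASample:
    image_path: str                      # RGB frame of an indoor scene
    depth_path: Optional[str] = None     # optional metric depth map input
    question: str = ""                   # natural-language spatial question
    answer: str = ""                     # ground-truth answer string
    task: str = "spatial_relation"       # spatial_relation | metric_size | metric_distance | grounding_3d
    box_3d: Optional[List[float]] = None # e.g. [x, y, z, w, h, d, yaw] for 3D grounding targets


# One illustrative sample per task family named in the abstract.
samples = [
    SpatialVQASample(
        image_path="scene_0001/frame_042.png",
        question="Is the lamp to the left or right of the sofa?",
        answer="To the left.",
        task="spatial_relation",
    ),
    SpatialVQASample(
        image_path="scene_0001/frame_042.png",
        depth_path="scene_0001/depth_042.png",
        question="How wide is the dining table, in meters?",
        answer="Roughly 1.6 meters.",
        task="metric_size",
    ),
    SpatialVQASample(
        image_path="scene_0002/frame_010.png",
        question="Give the 3D bounding box of the office chair.",
        answer="[0.4, -0.2, 1.8, 0.6, 0.9, 0.6, 0.0]",
        task="grounding_3d",
        box_3d=[0.4, -0.2, 1.8, 0.6, 0.9, 0.6, 0.0],
    ),
]

for s in samples:
    print(f"[{s.task}] Q: {s.question} A: {s.answer}")
```

Each sample pairs an image (optionally with metric depth or additional views) with a question/answer turn, which is the general shape an MLLM fine-tuning mix for these spatial tasks would take.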
Similar Papers
How to Enable LLM with 3D Capacity? A Survey of Spatial Reasoning in LLM
CV and Pattern Recognition
Helps computers understand 3D worlds like we do.
SD-VLM: Spatial Measuring and Understanding with Depth-Encoded Vision-Language Models
CV and Pattern Recognition
Helps computers understand 3D space from pictures.
SpatialLLM: A Compound 3D-Informed Design towards Spatially-Intelligent Large Multimodal Models
CV and Pattern Recognition
Teaches computers to understand 3D space like humans.