
Multimodal Spatial Reasoning in the Large Model Era: A Survey and Benchmarks

Published: October 29, 2025 | arXiv ID: 2510.25760v1

By: Xu Zheng, Zihao Dongfang, Lutao Jiang, and more

Potential Business Impact:

Helps computers understand and reason about physical spaces the way humans do.

Business Areas:
Geospatial Data and Analytics, Navigation and Mapping

Humans possess spatial reasoning abilities that enable them to understand spaces through multimodal observations, such as vision and sound. Large multimodal reasoning models extend these abilities by learning to perceive and reason, showing promising performance across diverse spatial tasks. However, systematic reviews and publicly available benchmarks for these models remain limited. In this survey, we provide a comprehensive review of multimodal spatial reasoning tasks with large models, categorizing recent progress in multimodal large language models (MLLMs) and introducing open benchmarks for evaluation. We begin by outlining general spatial reasoning, focusing on post-training techniques, explainability, and architecture. Beyond classical 2D tasks, we examine spatial relationship reasoning, scene and layout understanding, as well as visual question answering and grounding in 3D space. We also review advances in embodied AI, including vision-language navigation and action models. Additionally, we consider emerging modalities such as audio and egocentric video, which enable novel forms of spatial understanding through new sensors. We believe this survey establishes a solid foundation and offers insights into the growing field of multimodal spatial reasoning. Updated information about this survey, along with code and implementations of the open benchmarks, can be found at https://github.com/zhengxuJosh/Awesome-Spatial-Reasoning.

Repos / Data Links
https://github.com/zhengxuJosh/Awesome-Spatial-Reasoning

Page Count
34 pages

Category
Computer Science:
Computer Vision and Pattern Recognition