SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition
By: Peiran Xu, Sudong Wang, Yao Zhu, and more
Potential Business Impact:
Tests how well computers understand space and plan.
Spatial cognition is fundamental to real-world multimodal intelligence, allowing models to interact effectively with the physical environment. While multimodal large language models (MLLMs) have made significant strides, existing benchmarks often oversimplify spatial cognition, reducing it to a single-dimensional metric that fails to capture the hierarchical structure and interdependence of spatial abilities. To address this gap, we propose a hierarchical spatial cognition framework that decomposes spatial intelligence into five progressively complex levels, from basic observation to high-level planning. Building upon this taxonomy, we construct SpatialBench, a large-scale, fine-grained benchmark covering 15 tasks aligned with these cognitive levels. To provide a unified evaluation across heterogeneous tasks, we further introduce a high-level capability-oriented metric that reliably assesses a model's overall spatial reasoning ability. Extensive experiments across a wide range of MLLMs reveal distinct performance stratification across cognitive levels: models exhibit strong perceptual grounding yet remain limited in symbolic reasoning, causal inference, and planning. Additional human tests demonstrate that humans perform selective, goal-directed abstraction, while MLLMs tend to over-attend to surface details without coherent spatial intent. Our work establishes the first systematic framework for measuring hierarchical spatial cognition in MLLMs, laying the foundation for future spatially intelligent systems.
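The abstract does not spell out the capability-oriented metric, but the idea of aggregating 15 heterogeneous tasks into five cognitive levels can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the level names, the task-to-level mapping, and the geometric-mean aggregation are hypothetical, not SpatialBench's actual formula.

```python
# Hypothetical sketch of a hierarchical, capability-oriented aggregation.
# The level names, task-to-level mapping, and geometric-mean combination
# are illustrative assumptions, not the metric defined in the paper.

from statistics import geometric_mean

# Assumed grouping of the 15 tasks into five cognitive levels (placeholder names).
LEVELS = {
    "observation": ["task_01", "task_02", "task_03"],
    "relation":    ["task_04", "task_05", "task_06"],
    "reasoning":   ["task_07", "task_08", "task_09"],
    "causal":      ["task_10", "task_11", "task_12"],
    "planning":    ["task_13", "task_14", "task_15"],
}

def capability_score(task_accuracy: dict[str, float]) -> dict[str, float]:
    """Average task accuracy within each level, then combine levels.

    A geometric mean across levels penalizes models that are strong at
    perception but weak at planning, reflecting the interdependence of
    spatial abilities described in the abstract.
    """
    per_level = {
        level: sum(task_accuracy[t] for t in tasks) / len(tasks)
        for level, tasks in LEVELS.items()
    }
    # Clamp to a small positive value so the geometric mean is defined
    # even when a level scores zero.
    overall = geometric_mean(max(v, 1e-6) for v in per_level.values())
    return {**per_level, "overall": overall}
```

Under this assumed scheme, averaging within a level and taking a geometric mean across levels rewards balanced capability: a model that excels at perception but fails at planning scores lower overall than one that is moderately strong at both, mirroring the performance stratification the abstract reports.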
Similar Papers
11Plus-Bench: Demystifying Multimodal LLM Spatial Reasoning with Cognitive-Inspired Analysis
Computation and Language
Tests if AI can think about space like people.
Multimodal Spatial Reasoning in the Large Model Era: A Survey and Benchmarks
CV and Pattern Recognition
Helps computers understand spaces like humans do.