SpatialTree: How Spatial Abilities Branch Out in MLLMs
By: Yuxi Xiao, Longfei Li, Shen Yan, and more
Cognitive science suggests that spatial ability develops progressively, from perception to reasoning and interaction. Yet in multimodal LLMs (MLLMs), this hierarchy remains poorly understood, as most studies focus on a narrow set of tasks. We introduce SpatialTree, a cognitive-science-inspired hierarchy that organizes spatial abilities into four levels: low-level perception (L1), mental mapping (L2), simulation (L3), and agentic competence (L4). Based on this taxonomy, we construct the first capability-centric hierarchical benchmark, thoroughly evaluating mainstream MLLMs across 27 sub-abilities. The evaluation results reveal a clear structure: L1 skills are largely orthogonal, whereas higher-level skills are strongly correlated, indicating increasing interdependency. Through targeted supervised fine-tuning, we uncover a surprising transfer dynamic: negative transfer within L1, but strong cross-level transfer from low- to high-level abilities with notable synergy. Finally, we explore how to improve the entire hierarchy. We find that naive RL that encourages extensive "thinking" is unreliable: it helps complex reasoning but hurts intuitive perception. We propose a simple auto-think strategy that suppresses unnecessary deliberation, enabling RL to consistently improve performance across all levels. By building SpatialTree, we provide a proof-of-concept framework for understanding and systematically scaling spatial abilities in MLLMs.
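To make the hierarchy concrete, the minimal Python sketch below lays out the four levels named in the abstract and one possible reading of the auto-think idea. Only the level names come from the abstract; the sub-ability placeholders and the auto_think gating rule are illustrative assumptions, not the paper's actual 27 sub-abilities or training method.

from dataclasses import dataclass, field

@dataclass
class Level:
    """One level of the SpatialTree capability hierarchy."""
    name: str
    sub_abilities: list[str] = field(default_factory=list)

# Level names follow the abstract; sub-ability entries are hypothetical placeholders.
SPATIAL_TREE = [
    Level("L1: low-level perception", ["<perception sub-ability>", "..."]),
    Level("L2: mental mapping", ["<mapping sub-ability>", "..."]),
    Level("L3: simulation", ["<simulation sub-ability>", "..."]),
    Level("L4: agentic competence", ["<agentic sub-ability>", "..."]),
]

def auto_think(level_index: int, question: str) -> str:
    """Hypothetical 'auto-think' gate: skip explicit deliberation for
    intuitive, low-level perception queries and reserve step-by-step
    reasoning for higher levels. This is one simplistic interpretation
    of the strategy the abstract describes, not the paper's method."""
    if level_index <= 1:  # L1/L2: answer directly, no extended "thinking"
        return f"Answer directly: {question}"
    return f"Think step by step, then answer: {question}"

if __name__ == "__main__":
    for i, level in enumerate(SPATIAL_TREE):
        print(level.name, "->", auto_think(i, "example spatial query"))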
Similar Papers
SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition
Artificial Intelligence
Tests how well computers understand space and plan.
Scaling and Beyond: Advancing Spatial Reasoning in MLLMs Requires New Recipes
Machine Learning (CS)
Teaches AI to understand where things are.
From Indoor to Open World: Revealing the Spatial Reasoning Gap in MLLMs
Computer Vision and Pattern Recognition
Helps AI understand where things are in the real world.