REM: Evaluating LLM Embodied Spatial Reasoning through Multi-Frame Trajectories
By: Jacob Thompson, Emiliano Garcia-Lopez, Yonatan Bisk
Potential Business Impact:
Tests whether AI can understand space the way humans do.
Humans build viewpoint-independent cognitive maps through navigation, enabling intuitive reasoning about object permanence and spatial relations. We argue that multimodal large language models (MLLMs), despite extensive video training, lack this fundamental spatial reasoning capability, a critical limitation for embodied applications. To demonstrate these limitations and drive research, we introduce REM (Reasoning over Embodied Multi-Frame Trajectories), a benchmark using controllable 3D environments for long-horizon embodied spatial reasoning. REM systematically evaluates key aspects like object permanence/distinction, spatial relationships, and numerical tracking across dynamic embodied viewpoints. Our evaluation shows that the best-performing current models exhibit promising overall performance, but become increasingly unreliable at even moderate complexity levels easily handled by humans. These findings highlight challenges MLLMs face in developing robust spatial representations from sequential visual input. Consequently, REM provides targeted metrics and diagnostics to foster improved spatial understanding in future models.
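The abstract frames REM as question answering over multi-frame egocentric trajectories, scored across categories such as object permanence/distinction, spatial relations, and numerical tracking, with difficulty that grows with trajectory complexity. As a rough illustration only, the sketch below shows what such an evaluation loop could look like; the `TrajectoryItem` fields, the `answer_question` callable, and the grouping keys are hypothetical assumptions, not the benchmark's actual interface.

```python
# Hypothetical sketch of a REM-style evaluation loop. The item fields,
# the answer_question callable, and the result grouping are illustrative
# assumptions, not the paper's actual data format or API.
from dataclasses import dataclass
from collections import defaultdict
from typing import Callable, Dict, List


@dataclass
class TrajectoryItem:
    frame_paths: List[str]   # ordered egocentric frames along one trajectory
    question: str            # e.g. "How many red cubes have you seen so far?"
    answer: str              # ground-truth answer derived from the 3D scene
    category: str            # e.g. "object_permanence", "counting", "relations"


def evaluate(
    items: List[TrajectoryItem],
    answer_question: Callable[[List[str], str], str],
) -> Dict[str, float]:
    """Exact-match accuracy per question category and per trajectory length."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for item in items:
        pred = answer_question(item.frame_paths, item.question)
        ok = pred.strip().lower() == item.answer.strip().lower()
        for key in (item.category, f"frames={len(item.frame_paths)}"):
            correct[key] += int(ok)
            total[key] += 1
    return {k: correct[k] / total[k] for k in total}


if __name__ == "__main__":
    # Toy run with a stub "model" that always answers "2".
    items = [
        TrajectoryItem(["f0.png", "f1.png", "f2.png"],
                       "How many chairs have you passed?", "2", "counting"),
        TrajectoryItem(["f0.png", "f1.png"],
                       "Is the lamp still behind you?", "yes", "object_permanence"),
    ]
    print(evaluate(items, lambda frames, q: "2"))
```

Grouping scores by trajectory length is one simple way to surface the reliability drop at moderate complexity that the abstract reports; the actual paper's diagnostics may slice results differently.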
Similar Papers
Vision to Geometry: 3D Spatial Memory for Sequential Embodied MLLM Reasoning and Exploration
CV and Pattern Recognition
Helps robots learn and remember tasks in new places.
Embodied-R: Collaborative Framework for Activating Embodied Spatial Reasoning in Foundation Models via Reinforcement Learning
Artificial Intelligence
Helps computers understand space by watching videos.
Multimodal Spatial Reasoning in the Large Model Era: A Survey and Benchmarks
CV and Pattern Recognition
Helps computers understand spaces like humans do.