Score: 1

REM: Evaluating LLM Embodied Spatial Reasoning through Multi-Frame Trajectories

Published: November 30, 2025 | arXiv ID: 2512.00736v1

By: Jacob Thompson, Emiliano Garcia-Lopez, Yonatan Bisk

Potential Business Impact:

Benchmarks how well AI understands space the way humans do, pinpointing failures that matter for embodied applications.

Business Areas:
Motion Capture, Media and Entertainment, Video

Humans build viewpoint-independent cognitive maps through navigation, enabling intuitive reasoning about object permanence and spatial relations. We argue that multimodal large language models (MLLMs), despite extensive video training, lack this fundamental spatial reasoning capability, a critical limitation for embodied applications. To demonstrate these limitations and drive research, we introduce REM (Reasoning over Embodied Multi-Frame Trajectories), a benchmark using controllable 3D environments for long-horizon embodied spatial reasoning. REM systematically evaluates key aspects like object permanence/distinction, spatial relationships, and numerical tracking across dynamic embodied viewpoints. Our evaluation shows that the best-performing current models exhibit promising overall performance, but become increasingly unreliable at even moderate complexity levels easily handled by humans. These findings highlight challenges MLLMs face in developing robust spatial representations from sequential visual input. Consequently, REM provides targeted metrics and diagnostics to foster improved spatial understanding in future models.
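To make the evaluation protocol concrete, here is a minimal sketch of a REM-style scoring loop: a model is shown an ordered sequence of frames from an embodied trajectory and asked a question (e.g., numerical tracking of objects seen so far), and its answer is checked against simulator ground truth. This is illustrative only; the dataclass names, question template, `query_mllm` stub, and exact-match scoring are assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One egocentric observation along a trajectory (hypothetical schema)."""
    image_path: str  # rendered view from a controllable 3D environment
    step: int        # position along the trajectory

@dataclass
class SpatialQuery:
    """A trajectory-level spatial question (illustrative, not REM's schema)."""
    frames: list[Frame]
    question: str    # e.g., object permanence or numerical tracking
    answer: str      # ground truth from the simulator

def query_mllm(frames: list[Frame], question: str) -> str:
    """Stand-in for a real MLLM call; swap in an actual vision-language API."""
    return "2"  # placeholder prediction

def evaluate(queries: list[SpatialQuery]) -> float:
    """Exact-match accuracy over a set of multi-frame questions."""
    correct = sum(
        query_mllm(q.frames, q.question).strip().lower() == q.answer.lower()
        for q in queries
    )
    return correct / len(queries)

if __name__ == "__main__":
    frames = [Frame(f"traj0/frame_{i:03d}.png", i) for i in range(8)]
    queries = [SpatialQuery(frames, "How many red cubes have you seen so far?", "2")]
    print(f"accuracy: {evaluate(queries):.2f}")
```

In practice, difficulty would be scaled by lengthening trajectories and increasing scene complexity, which is where the paper reports current models degrading well before humans do.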

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science: Machine Learning (CS)