Embodied-R: Collaborative Framework for Activating Embodied Spatial Reasoning in Foundation Models via Reinforcement Learning
By: Baining Zhao, Ziyou Wang, Jianjie Fang, and more
Potential Business Impact:
Helps computers understand space by watching videos.
Humans can perceive and reason about spatial relationships from sequential visual observations, such as egocentric video streams. However, how pretrained models acquire such abilities, especially high-level reasoning, remains unclear. This paper introduces Embodied-R, a collaborative framework that combines large-scale Vision-Language Models (VLMs) for perception with small-scale Language Models (LMs) for reasoning. Using Reinforcement Learning (RL) with a novel reward system that accounts for think-answer logical consistency, the model achieves slow-thinking capabilities with limited computational resources. After training on only 5k embodied video samples, Embodied-R with a 3B LM matches state-of-the-art multimodal reasoning models (OpenAI-o1, Gemini-2.5-pro) on both in-distribution and out-of-distribution embodied spatial reasoning tasks. Embodied-R also exhibits emergent thinking patterns such as systematic analysis and contextual integration. We further explore research questions including response length, training on the VLM, strategies for reward design, and differences in model generalization after SFT (Supervised Fine-Tuning) and RL training.
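To make the reward idea concrete, here is a minimal sketch of a rule-based RL reward that scores answer accuracy, output format, and a simple "think-answer logical consistency" check as described in the abstract. The tag format, weights, and consistency heuristic below are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of a rule-based reward combining accuracy, format, and
# think-answer consistency terms (illustrative; not the authors' code).
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)
ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)


def format_reward(response: str) -> float:
    """1.0 if the response is exactly one <think> block followed by one <answer> block."""
    pattern = r"\s*<think>.*?</think>\s*<answer>.*?</answer>\s*"
    return 1.0 if re.fullmatch(pattern, response, re.DOTALL) else 0.0


def accuracy_reward(response: str, ground_truth: str) -> float:
    """1.0 if the extracted answer matches the ground-truth option (e.g. 'A'-'D')."""
    m = ANSWER_RE.search(response)
    if not m:
        return 0.0
    return 1.0 if m.group(1).strip().upper() == ground_truth.strip().upper() else 0.0


def consistency_reward(response: str) -> float:
    """Heuristic consistency check: the option given in <answer> should also be
    the conclusion reached at the end of the <think> trace."""
    think, answer = THINK_RE.search(response), ANSWER_RE.search(response)
    if not (think and answer):
        return 0.0
    final_answer = answer.group(1).strip().upper()
    # Take the last option letter mentioned in the reasoning trace.
    options_in_think = re.findall(r"\b([A-D])\b", think.group(1).upper())
    return 1.0 if options_in_think and options_in_think[-1] == final_answer else 0.0


def total_reward(response: str, ground_truth: str,
                 w_acc: float = 1.0, w_fmt: float = 0.5, w_con: float = 0.5) -> float:
    """Weighted sum used as the scalar RL reward; weights are illustrative."""
    return (w_acc * accuracy_reward(response, ground_truth)
            + w_fmt * format_reward(response)
            + w_con * consistency_reward(response))


if __name__ == "__main__":
    sample = ("<think>The camera turns left, so the chair is now on the right. "
              "That corresponds to option B.</think><answer>B</answer>")
    print(total_reward(sample, "B"))  # 2.0 with the default weights
```

In a setup like this, a response whose final answer contradicts its own reasoning trace earns a lower reward even when the answer happens to be correct, which is one plausible way to encourage coherent slow-thinking traces.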
Similar Papers
Robot-R1: Reinforcement Learning for Enhanced Embodied Reasoning in Robotics
Robotics
Teaches robots to do tasks better.
Reinforced Reasoning for Embodied Planning
Artificial Intelligence
Teaches robots to plan and act in new places.
EmbRACE-3K: Embodied Reasoning and Action in Complex Environments
CV and Pattern Recognition
Teaches robots to understand and act in real places.