Reinforced Reasoning for Embodied Planning
By: Di Wu, Jiaxin Fan, Junzhe Zang, and more
Potential Business Impact:
Teaches robots to plan and act in new places.
Embodied planning requires agents to make coherent multi-step decisions based on dynamic visual observations and natural language goals. While recent vision-language models (VLMs) excel at static perception tasks, they struggle with the temporal reasoning, spatial understanding, and commonsense grounding needed for planning in interactive environments. In this work, we introduce a reinforcement fine-tuning framework that brings R1-style reasoning enhancement into embodied planning. We first distill a high-quality dataset from a powerful closed-source model and perform supervised fine-tuning (SFT) to equip the model with structured decision-making priors. We then design a rule-based reward function tailored to multi-step action quality and optimize the policy via Group Relative Policy Optimization (GRPO). Our approach is evaluated on Embench, a recent benchmark for interactive embodied tasks, covering both in-domain and out-of-domain scenarios. Experimental results show that our method significantly outperforms models of similar or larger scale, including GPT-4o-mini and 70B+ open-source baselines, and exhibits strong generalization to unseen environments. This work highlights the potential of reinforcement-driven reasoning to advance long-horizon planning in embodied AI.
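To make the training recipe concrete, below is a minimal sketch of the two ingredients the abstract names: a rule-based reward over multi-step action sequences and GRPO's group-relative advantage normalization. The reward decomposition (a format bonus plus a prefix-match step score) and all function names are hypothetical illustrations, not the paper's actual reward terms or hyperparameters.

```python
# Hypothetical sketch: rule-based multi-step reward + GRPO-style advantages.
# The specific reward terms (format_bonus, prefix matching) are assumptions
# for illustration; the paper's exact reward design may differ.

from typing import List


def rule_based_reward(predicted: List[str], reference: List[str]) -> float:
    """Score a multi-step action plan against a reference plan.

    Assumed decomposition: a small format bonus for producing any parseable
    actions, plus a step-level score crediting correct actions in the correct
    order (prefix match), which favors coherent long-horizon plans.
    """
    if not predicted:
        return 0.0  # malformed output: no parseable actions
    format_bonus = 0.1
    # Count how many leading steps agree with the reference plan.
    correct_prefix = 0
    for pred, ref in zip(predicted, reference):
        if pred != ref:
            break
        correct_prefix += 1
    step_score = correct_prefix / max(len(reference), 1)
    return format_bonus + 0.9 * step_score


def group_relative_advantages(rewards: List[float]) -> List[float]:
    """GRPO-style advantages: normalize each sampled plan's reward by the
    mean and standard deviation of its group (plans sampled for the same
    prompt), removing the need for a learned value function."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std < 1e-8:
        return [0.0] * n  # identical rewards carry no learning signal
    return [(r - mean) / std for r in rewards]


# Toy usage: four candidate plans sampled for one goal, scored against a
# reference plan, then converted into group-relative advantages.
reference = ["go to kitchen", "open fridge", "pick up milk"]
samples = [
    ["go to kitchen", "open fridge", "pick up milk"],  # fully correct
    ["go to kitchen", "open fridge", "pick up eggs"],  # correct prefix of 2
    ["open fridge", "pick up milk"],                   # wrong first step
    [],                                                # malformed output
]
rewards = [rule_based_reward(s, reference) for s in samples]
print(rewards)                            # [1.0, 0.7, 0.1, 0.0]
print(group_relative_advantages(rewards)) # best plan gets largest advantage
```

Group-relative normalization is what lets GRPO sidestep a critic: each sampled plan is judged only against the other plans drawn for the same goal, so the policy gradient pushes probability toward the relatively better plans in each group.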
Similar Papers
RoboGPT-R1: Enhancing Robot Planning with Reinforcement Learning
Artificial Intelligence
Robots learn to follow complex instructions better.
Embodied-R: Collaborative Framework for Activating Embodied Spatial Reasoning in Foundation Models via Reinforcement Learning
Artificial Intelligence
Helps computers understand space by watching videos.