EMMOE: A Comprehensive Benchmark for Embodied Mobile Manipulation in Open Environments
By: Dongping Li, Tielong Cai, Tianci Tang, and more
Potential Business Impact:
Robots understand and do everyday tasks from your words.
Developing autonomous home robots controlled by natural language has long been a pursuit of humanity. While advances in large language models (LLMs) and embodied intelligence bring this goal closer, several challenges persist: the lack of a unified benchmark for more complex robot tasks, limited evaluation methods and metrics, and data incompatibility between LLMs and mobile manipulation trajectories. To address these issues, we propose Embodied Mobile Manipulation in Open Environments (EMMOE), a benchmark that requires agents to interpret user instructions and execute long-horizon everyday tasks in continuous space. EMMOE seamlessly integrates high-level and low-level embodied tasks into a unified framework, along with three new metrics for more diverse assessment. Additionally, we collect a new dataset that features diverse task attributes, detailed process annotations, re-plans after failures, and two sub-datasets for LLM training. Furthermore, we design a sophisticated agent system consisting of an LLM trained with Direct Preference Optimization (DPO), lightweight navigation and manipulation models, and multiple error detection mechanisms. Finally, we demonstrate our agent's performance and present evaluations of different models and policies.
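The abstract mentions training the planner LLM with Direct Preference Optimization (DPO). As background, the standard DPO objective scores a preferred ("chosen") response against a dispreferred ("rejected") one relative to a frozen reference model. The sketch below shows that per-pair loss for summed token log-probabilities; the function name and `beta` default are illustrative, not taken from the paper.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one preference pair (illustrative sketch).

    logp_* are summed token log-probabilities of the chosen/rejected
    responses under the policy being trained; ref_logp_* are the same
    quantities under the frozen reference model.
    """
    # Implicit reward margin: how much more the policy (vs. the reference)
    # prefers the chosen response over the rejected one.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin): shrinks as the policy learns the preference.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference exactly, the margin is zero and the loss equals log 2; pushing probability mass toward the chosen response drives the loss toward zero.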
Similar Papers
EMMA: Scaling Mobile Manipulation via Egocentric Human Data
Robotics
Teaches robots to do tasks using human moves.
MiMo-Embodied: X-Embodied Foundation Model Technical Report
Robotics
Teaches robots and cars to learn together.
Scene-Adaptive Motion Planning with Explicit Mixture of Experts and Interaction-Oriented Optimization
Robotics
Helps self-driving cars navigate city streets safely.