MiMo-Embodied: X-Embodied Foundation Model Technical Report
By: Xiaoshuai Hao, Lei Zhou, Zhijian Huang, and more
Potential Business Impact:
Teaches robots and self-driving cars to learn from each other.
We open-source MiMo-Embodied, the first cross-embodied foundation model to integrate both Autonomous Driving and Embodied AI and achieve state-of-the-art performance in each. MiMo-Embodied sets new records across 17 embodied AI benchmarks in Task Planning, Affordance Prediction, and Spatial Understanding, while also excelling on 12 autonomous driving benchmarks covering Environmental Perception, Status Prediction, and Driving Planning. Across these tasks, MiMo-Embodied significantly outperforms existing open-source, closed-source, and specialized baselines. Our results indicate that, through multi-stage learning, curated data construction, and CoT/RL fine-tuning, the two domains exhibit strong positive transfer and reinforce one another. We provide a detailed analysis of our model design and training methodology to facilitate further research. Code and models are available at https://github.com/XiaomiMiMo/MiMo-Embodied.
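The training recipe the abstract names (staged supervised fine-tuning over curated data mixtures, followed by CoT/RL fine-tuning) can be pictured with a minimal sketch. Everything below is a toy illustration: the tiny regressor, the stage names, the synthetic data, and the reward-weighted loss are assumptions standing in for the report's actual vision-language backbone and RL algorithm, not the authors' implementation.

```python
# Hypothetical sketch of a multi-stage curriculum: sequential SFT stages
# followed by a crude reward-weighted stage standing in for RL fine-tuning.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the foundation model: a small MLP regressor.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

def make_batch(n=32):
    """Synthetic (input, target) pairs standing in for curated SFT data."""
    x = torch.randn(n, 16)
    y = x.sum(dim=1, keepdim=True)
    return x, y

# Stages 1-3: sequential supervised fine-tuning on different data mixtures
# (e.g. embodied-AI data, then autonomous-driving data, then CoT traces).
# Each stage starts from the previous stage's weights.
for stage in ["embodied_sft", "driving_sft", "cot_sft"]:
    for _ in range(100):
        x, y = make_batch()
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"{stage}: loss={loss.item():.4f}")

# Stage 4: reward-weighted regression as a stand-in for RL fine-tuning.
# Real RL-style training samples model outputs and scores them with a
# reward model; here the reward is simply higher for lower-error samples.
for _ in range(100):
    x, y = make_batch()
    pred = model(x)
    err = (pred - y).abs().detach()
    reward = 1.0 / (1.0 + err)  # detached, so it only reweights the loss
    loss = (reward * (pred - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"rl_ft: loss={loss.item():.4f}")
```

The staging is the point: each phase reuses the weights from the previous one, which is the mechanism behind the cross-domain positive transfer the abstract reports.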
Similar Papers
MiMo-VL Technical Report
Computation and Language
Helps computers understand pictures and words better.
Embodied Navigation Foundation Model
Robotics
Robots learn to navigate anywhere and handle many different jobs.
Autonomous Embodied Agents: When Robotics Meets Deep Learning Reasoning
Robotics
Robots learn to do tasks in new places.