MoE-Loco: Mixture of Experts for Multitask Locomotion

Published: March 11, 2025 | arXiv ID: 2503.08564v2

By: Runhan Huang, Shaoting Zhu, Yilun Du and more

Potential Business Impact:

A single learned policy lets legged robots traverse varied terrain (bars, pits, stairs, slopes, and baffles) in both quadrupedal and bipedal gaits.

Business Areas:
Robotics Hardware, Science and Engineering, Software

We present MoE-Loco, a Mixture of Experts (MoE) framework for multitask locomotion for legged robots. Our method enables a single policy to handle diverse terrains, including bars, pits, stairs, slopes, and baffles, while supporting quadrupedal and bipedal gaits. Using MoE, we mitigate the gradient conflicts that typically arise in multitask reinforcement learning, improving both training efficiency and performance. Our experiments demonstrate that different experts naturally specialize in distinct locomotion behaviors, which can be leveraged for task migration and skill composition. We further validate our approach in both simulation and real-world deployment, showcasing its robustness and adaptability.
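The abstract describes a Mixture of Experts policy in which a gating network blends expert outputs, with individual experts specializing in distinct locomotion behaviors. The following is a minimal sketch of that general MoE idea with a dense softmax gate; all names, sizes, and the use of linear experts are illustrative assumptions, not details from the MoE-Loco paper.

```python
import numpy as np

# Illustrative MoE policy layer: a softmax gate weights the outputs of
# several expert heads. Real experts would be MLPs trained with RL;
# here each "expert" is a single linear map for brevity (assumption).

rng = np.random.default_rng(0)

NUM_EXPERTS = 4   # e.g. experts that might specialize in stairs, slopes, gaits (assumed)
OBS_DIM = 8       # proprioceptive observation size (assumed)
ACT_DIM = 3       # action dimension (assumed)

# One linear expert per skill: (NUM_EXPERTS, OBS_DIM, ACT_DIM)
expert_weights = rng.normal(size=(NUM_EXPERTS, OBS_DIM, ACT_DIM))
# Gating network: maps the observation to one logit per expert.
gate_weights = rng.normal(size=(OBS_DIM, NUM_EXPERTS))

def softmax(x):
    z = x - x.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_policy(obs):
    """Blend each expert's action by the gate's softmax weights."""
    gate = softmax(obs @ gate_weights)                            # (NUM_EXPERTS,)
    expert_actions = np.einsum("i,eij->ej", obs, expert_weights)  # (NUM_EXPERTS, ACT_DIM)
    return gate @ expert_actions                                  # (ACT_DIM,)

obs = rng.normal(size=OBS_DIM)
action = moe_policy(obs)
print(action.shape)  # (3,)
```

Because the gate is a function of the observation, different inputs can route to different experts, which is one way such a policy can keep per-terrain skills separated while sharing a single network.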

Country of Origin
🇨🇳 China

Page Count
9 pages

Category
Computer Science:
Robotics