MoE-Loco: Mixture of Experts for Multitask Locomotion
By: Runhan Huang, Shaoting Zhu, Yilun Du, and more
Potential Business Impact:
A single robot policy learns to walk over many kinds of terrain.
We present MoE-Loco, a Mixture-of-Experts (MoE) framework for multitask legged-robot locomotion. Our method enables a single policy to handle diverse terrains, including bars, pits, stairs, slopes, and baffles, while supporting both quadrupedal and bipedal gaits. Using MoE, we mitigate the gradient conflicts that typically arise in multitask reinforcement learning, improving both training efficiency and performance. Our experiments demonstrate that different experts naturally specialize in distinct locomotion behaviors, which can be leveraged for task migration and skill composition. We further validate our approach in both simulation and real-world deployment, showcasing its robustness and adaptability.
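To make the architecture concrete, below is a minimal sketch (not the authors' released code) of a soft Mixture-of-Experts policy head of the kind the abstract describes: several expert networks whose outputs are blended by a learned gate, so that different experts can dominate on different terrains or gaits and gradient interference between tasks is reduced. The observation and action dimensions, expert count, and PyTorch implementation details are all illustrative assumptions.

```python
# Minimal sketch of a soft Mixture-of-Experts policy head (illustrative only,
# not the MoE-Loco implementation). Assumes proprioceptive observations in and
# joint-target actions out; all sizes are placeholder assumptions.
import torch
import torch.nn as nn


class MoEPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        # Each expert is an independent MLP; the paper reports that experts
        # naturally specialize in distinct locomotion behaviors.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ELU(),
                nn.Linear(hidden, hidden), nn.ELU(),
                nn.Linear(hidden, act_dim),
            )
            for _ in range(num_experts)
        ])
        # The gate maps each observation to softmax weights over experts.
        self.gate = nn.Linear(obs_dim, num_experts)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(obs), dim=-1)                     # (B, E)
        actions = torch.stack([e(obs) for e in self.experts], dim=-1)       # (B, A, E)
        # Weighted combination: per-observation routing lets different experts
        # handle different tasks instead of forcing one shared network to fit all.
        return (actions * weights.unsqueeze(1)).sum(dim=-1)                 # (B, A)


if __name__ == "__main__":
    policy = MoEPolicy(obs_dim=48, act_dim=12)
    dummy_obs = torch.randn(8, 48)
    print(policy(dummy_obs).shape)  # torch.Size([8, 12])
```

In a multitask RL setup, such a head would replace the usual single-MLP actor; inspecting the gate weights per terrain is one way the learned specialization could be probed or reused for skill composition.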
Similar Papers
Mixture-of-Experts for Personalized and Semantic-Aware Next Location Prediction
Artificial Intelligence
Predicts where people will go next, better.
MoMoE: A Mixture of Expert Agent Model for Financial Sentiment Analysis
Computational Engineering, Finance, and Science
Makes AI smarter by letting many AI parts work together.
Mixture of Experts in Large Language Models
Machine Learning (CS)
Makes smart computer programs learn faster and better.