Adversarial Locomotion and Motion Imitation for Humanoid Policy Learning
By: Jiyuan Shi, Xinzhe Liu, Dewei Wang, and more
Potential Business Impact:
Robots move better by splitting body jobs.
Humans exhibit diverse and expressive whole-body movements. However, attaining human-like whole-body coordination in humanoid robots remains challenging, as conventional approaches that mimic whole-body motions often neglect the distinct roles of the upper and lower body. This oversight leads to computationally intensive policy learning and frequently causes robot instability and falls during real-world execution. To address these issues, we propose Adversarial Locomotion and Motion Imitation (ALMI), a novel framework that enables adversarial policy learning between the upper and lower body. Specifically, the lower body aims to provide robust locomotion capabilities to follow velocity commands while the upper body tracks various motions. Conversely, the upper-body policy ensures effective motion tracking while the robot executes velocity-based movements. Through iterative updates, these policies achieve coordinated whole-body control, which can be extended to loco-manipulation tasks with teleoperation systems. Extensive experiments demonstrate that our method achieves robust locomotion and precise motion tracking both in simulation and on the full-size Unitree H1 robot. Additionally, we release a large-scale whole-body motion control dataset featuring high-quality episodic trajectories from MuJoCo simulations that are deployable on real robots. The project page is https://almi-humanoid.github.io.
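The iterative adversarial scheme the abstract describes, where each body-half policy is improved while the other acts as a disturbance, can be sketched as alternating optimization. Below is a minimal toy sketch, not the authors' implementation: the reward functions, the `hill_climb` routine (standing in for an RL policy update), and all parameter names are illustrative assumptions.

```python
import random

random.seed(0)

def hill_climb(objective, x, step=0.05, iters=300):
    """Random-search improvement step, standing in for an RL policy update."""
    best = x
    for _ in range(iters):
        cand = best + random.uniform(-step, step)
        if objective(cand) > objective(best):
            best = cand
    return best

# Toy reward models (assumptions, not from the paper):
# the lower body tracks a velocity command while upper-body motion
# amplitude acts as a disturbance; the upper body tracks a reference
# amplitude while lower-body velocity acts as a disturbance.
def lower_reward(vel, upper_amp):
    v_cmd = 0.8  # commanded forward velocity
    return -(vel - v_cmd) ** 2 - 0.2 * upper_amp * abs(vel)

def upper_reward(amp, lower_vel):
    a_ref = 1.0  # reference motion amplitude
    return -(amp - a_ref) ** 2 - 0.1 * abs(lower_vel - 0.8)

lower_vel, upper_amp = 0.0, 0.0
for _ in range(10):  # iterative (alternating) policy updates
    # Update the lower-body "policy" against the current upper-body behavior.
    lower_vel = hill_climb(lambda v: lower_reward(v, upper_amp), lower_vel)
    # Update the upper-body "policy" against the current lower-body behavior.
    upper_amp = hill_climb(lambda a: upper_reward(a, lower_vel), upper_amp)
```

After the alternating rounds, each scalar "policy" settles near its optimum despite the other's disturbance, mirroring in miniature how the two ALMI policies co-adapt toward coordinated whole-body control.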
Similar Papers
One-shot Humanoid Whole-body Motion Learning
Robotics
Robots learn new moves from just one example.
A Whole-Body Motion Imitation Framework from Human Data for Full-Size Humanoid Robot
Robotics
Robots copy human moves, staying balanced.
Latent Conditioned Loco-Manipulation Using Motion Priors
Robotics
Robots learn many moves by watching and copying.