Learning Sim-to-Real Humanoid Locomotion in 15 Minutes
By: Younggyo Seo, Carmelo Sferrazza, Juyue Chen, and more
Potential Business Impact:
Teaches robots to walk in minutes.
Massively parallel simulation has reduced reinforcement learning (RL) training time for robots from days to minutes. However, achieving fast and reliable sim-to-real RL for humanoid control remains difficult due to challenges such as high dimensionality and domain randomization. In this work, we introduce a simple and practical recipe based on off-policy RL algorithms, namely FastSAC and FastTD3, that enables rapid training of humanoid locomotion policies in just 15 minutes on a single RTX 4090 GPU. Our recipe stabilizes off-policy RL algorithms at massive scale, with thousands of parallel environments, through carefully tuned design choices and minimalist reward functions. We demonstrate rapid end-to-end learning of humanoid locomotion controllers on Unitree G1 and Booster T1 robots under strong domain randomization, e.g., randomized dynamics, rough terrain, and push perturbations, as well as fast training of whole-body human-motion tracking policies. We provide videos and an open-source implementation at: https://younggyo.me/fastsac-humanoid.
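The core pattern the abstract describes is an off-policy learner kept stable while thousands of simulated environments push transitions into a replay buffer every step. The sketch below is a minimal, hypothetical illustration of that collect-and-update loop, not the authors' FastSAC/FastTD3 code: ToyVecEnv, the network sizes, and all hyperparameters are assumptions, and a single Q-network stands in for the usual twin critics for brevity.

```python
# Hypothetical sketch of massively parallel off-policy RL (not the paper's code):
# a batched "simulator" steps NUM_ENVS environments at once on the GPU, each step
# inserts NUM_ENVS transitions into a flat tensor replay buffer, and a TD3-style
# learner performs critic/actor updates between collection steps.
import torch
import torch.nn as nn

NUM_ENVS, OBS_DIM, ACT_DIM = 4096, 48, 12      # assumed sizes, not from the paper
BUF_SIZE, BATCH = 100_000, 8192
device = "cuda" if torch.cuda.is_available() else "cpu"

class ToyVecEnv:
    """Stand-in for a GPU-parallel physics simulator: steps all envs as one batch."""
    def reset(self):
        return torch.randn(NUM_ENVS, OBS_DIM, device=device)
    def step(self, act):
        obs = torch.randn(NUM_ENVS, OBS_DIM, device=device)
        rew = -act.pow(2).mean(dim=-1)                      # placeholder reward
        done = torch.rand(NUM_ENVS, device=device) < 0.01   # random resets
        return obs, rew, done

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, out))

actor = mlp(OBS_DIM, ACT_DIM).to(device)
critic = mlp(OBS_DIM + ACT_DIM, 1).to(device)               # single Q for brevity
critic_tgt = mlp(OBS_DIM + ACT_DIM, 1).to(device)
critic_tgt.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=3e-4)

# Flat tensor replay buffer sized for high-throughput parallel collection.
buf = {k: torch.zeros(BUF_SIZE, d, device=device) for k, d in
       [("obs", OBS_DIM), ("act", ACT_DIM), ("rew", 1),
        ("next_obs", OBS_DIM), ("done", 1)]}
ptr, filled = 0, 0

env = ToyVecEnv()
obs = env.reset()
for step in range(200):
    with torch.no_grad():
        act = torch.tanh(actor(obs)) + 0.1 * torch.randn(NUM_ENVS, ACT_DIM, device=device)
    next_obs, rew, done = env.step(act)
    # One env step writes NUM_ENVS transitions at once (ring-buffer indexing).
    idx = (ptr + torch.arange(NUM_ENVS, device=device)) % BUF_SIZE
    for k, v in [("obs", obs), ("act", act), ("rew", rew.unsqueeze(-1)),
                 ("next_obs", next_obs), ("done", done.float().unsqueeze(-1))]:
        buf[k][idx] = v
    ptr = (ptr + NUM_ENVS) % BUF_SIZE
    filled = min(filled + NUM_ENVS, BUF_SIZE)
    obs = next_obs

    if filled >= BATCH:
        i = torch.randint(0, filled, (BATCH,), device=device)
        with torch.no_grad():
            next_a = torch.tanh(actor(buf["next_obs"][i]))
            target = buf["rew"][i] + 0.99 * (1 - buf["done"][i]) * \
                     critic_tgt(torch.cat([buf["next_obs"][i], next_a], -1))
        q = critic(torch.cat([buf["obs"][i], buf["act"][i]], -1))
        loss_c = (q - target).pow(2).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        # Delayed actor update, a stabilizer typical of TD3-style methods.
        if step % 2 == 0:
            loss_a = -critic(torch.cat([buf["obs"][i],
                                        torch.tanh(actor(buf["obs"][i]))], -1)).mean()
            opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        # Polyak averaging of the target critic.
        with torch.no_grad():
            for p, pt in zip(critic.parameters(), critic_tgt.parameters()):
                pt.mul_(0.995).add_(0.005 * p)
```

The key scaling choice this sketch illustrates is that collection and learning share one device: stepping 4096 environments produces 4096 transitions per iteration, so the buffer fills in a handful of steps and large update batches stay cheap relative to simulation.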
Similar Papers
Opening the Sim-to-Real Door for Humanoid Pixel-to-Action Policy Transfer
Robotics
Robots learn to open doors just by watching.
Robot Trains Robot: Automatic Real-World Policy Adaptation and Learning for Humanoids
Robotics
Robot arm teaches robot to walk and move.