VIRAL: Visual Sim-to-Real at Scale for Humanoid Loco-Manipulation
By: Tairan He, Zi Wang, Haoru Xue, and more
Potential Business Impact:
Robots learn to walk and move objects using only camera images, trained entirely in simulation.
A key barrier to the real-world deployment of humanoid robots is the lack of autonomous loco-manipulation skills. We introduce VIRAL, a visual sim-to-real framework that learns humanoid loco-manipulation entirely in simulation and deploys it zero-shot to real hardware. VIRAL follows a teacher-student design: a privileged RL teacher, operating on full state, learns long-horizon loco-manipulation using a delta action space and reference state initialization. A vision-based student policy is then distilled from the teacher via large-scale simulation with tiled rendering, trained with a mixture of online DAgger and behavior cloning. We find that compute scale is critical: scaling simulation to tens of GPUs (up to 64) makes both teacher and student training reliable, while low-compute regimes often fail. To bridge the sim-to-real gap, VIRAL combines large-scale visual domain randomization (over lighting, materials, camera parameters, image quality, and sensor delays) with real-to-sim alignment of the dexterous hands and cameras. Deployed on a Unitree G1 humanoid, the resulting RGB-based policy performs continuous loco-manipulation for up to 54 cycles, generalizing to diverse spatial and appearance variations without any real-world fine-tuning, and approaching expert-level teleoperation performance. Extensive ablations dissect the key design choices required to make RGB-based humanoid loco-manipulation work in practice.
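To make the teacher-student distillation concrete, here is a minimal sketch of one plausible form of the training step described above: an online-DAgger term, where the privileged full-state teacher relabels the states the vision-based student actually visits, mixed with a behavior-cloning term on pre-collected teacher rollouts. This is an illustration under stated assumptions, not the paper's implementation; `student`, `teacher`, `env`, `bc_batch`, and the `bc_weight` coefficient are hypothetical names, and in VIRAL the simulation batch would come from tiled rendering across many GPUs rather than a single `env` object.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, env, bc_batch, optimizer, bc_weight=0.5):
    """One student update mixing online DAgger with behavior cloning.

    Hypothetical sketch: `student` maps rendered RGB to actions,
    `teacher` maps privileged full state to actions, `env` is a
    batched simulator, and `bc_batch` holds teacher rollouts.
    None of these names come from the VIRAL paper itself.
    """
    # Online DAgger term: roll the student out, relabel with the teacher.
    rgb, priv_state = env.observe()           # student sees pixels only
    student_action = student(rgb)
    with torch.no_grad():
        teacher_action = teacher(priv_state)  # privileged expert label
    dagger_loss = F.mse_loss(student_action, teacher_action)
    env.step(student_action.detach())         # visit student-induced states

    # Behavior-cloning term: regress onto the teacher's own rollouts.
    bc_rgb, bc_action = bc_batch
    bc_loss = F.mse_loss(student(bc_rgb), bc_action)

    # Mix the two supervision signals and update the student.
    loss = dagger_loss + bc_weight * bc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```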
Similar Papers
Opening the Sim-to-Real Door for Humanoid Pixel-to-Action Policy Transfer
Robotics
Robots learn to open doors using only camera images.
VisualMimic: Visual Humanoid Loco-Manipulation via Motion Tracking and Generation
Robotics
Robots learn to move and grab like humans.
Learning Sim-to-Real Humanoid Locomotion in 15 Minutes
Robotics
Teaches robots to walk in minutes.