X-Humanoid: Robotize Human Videos to Generate Humanoid Videos at Scale
By: Pei Yang, Hai Ci, Yiren Song, and others
Potential Business Impact:
Turns human videos into robot training videos.
The advancement of embodied AI has unlocked significant potential for intelligent humanoid robots. However, progress in both Vision-Language-Action (VLA) models and world models is severely hampered by the scarcity of large-scale, diverse training data. A promising solution is to "robotize" web-scale human videos, an approach already proven effective for policy training. Existing solutions, however, mainly "overlay" robot arms onto egocentric videos; they cannot handle the complex full-body motions and scene occlusions found in third-person videos, making them unsuitable for robotizing full human bodies. To bridge this gap, we introduce X-Humanoid, a generative video editing approach that adapts the powerful Wan 2.2 model into a video-to-video structure and finetunes it for the human-to-humanoid translation task. This finetuning requires paired human-humanoid videos, so we design a scalable data creation pipeline that turns community assets into 17+ hours of paired synthetic videos using Unreal Engine. We then apply our trained model to 60 hours of Ego-Exo4D video, generating and releasing a new large-scale dataset of over 3.6 million "robotized" humanoid video frames. Quantitative analysis and user studies confirm our method's superiority over existing baselines: 69% of users rated it best for motion consistency and 62.1% for embodiment correctness.
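The fine-tuning step above consumes paired human-humanoid clips, where each source clip has a matching synthetic humanoid render. As a minimal illustration of that pairing step, here is a hedged Python sketch; the filename-based matching scheme and the function name `pair_clips` are assumptions for illustration, not the authors' actual pipeline.

```python
def pair_clips(human_names, humanoid_names):
    """Match human clips to humanoid renders that share the same clip id.

    Hypothetical convention: a human clip and its Unreal Engine humanoid
    render use the same filename stem (e.g. "clip001.mp4" in both sets).
    Clips without a rendered counterpart are skipped.
    """
    # Index humanoid renders by clip id (filename without extension).
    renders = {n.rsplit(".", 1)[0]: n for n in humanoid_names}
    pairs = []
    for name in sorted(human_names):
        clip_id = name.rsplit(".", 1)[0]
        if clip_id in renders:
            pairs.append((name, renders[clip_id]))
    return pairs


pairs = pair_clips(["a.mp4", "c.mp4", "b.mp4"], ["b.mp4", "a.mp4"])
# -> [("a.mp4", "a.mp4"), ("b.mp4", "b.mp4")]; "c.mp4" has no render
```

Each returned pair would then feed the video-to-video model as (input, target) supervision.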
Similar Papers
From Generated Human Videos to Physically Plausible Robot Trajectories
Robotics
Robots copy human moves from generated videos.
Masquerade: Learning from In-the-wild Human Videos using Data-Editing
Robotics
Teaches robots to do tasks using human videos.
Unveiling the Impact of Data and Model Scaling on High-Level Control for Humanoid Robots
Robotics
Teaches robots to move like humans from videos.