\textsc{Gen2Real}: Towards Demo-Free Dexterous Manipulation by Harnessing Generated Video

Published: September 16, 2025 | arXiv ID: 2509.14178v1

By: Kai Ye, Yuhang Wu, Shuyuan Hu, and more

Potential Business Impact:

Robots learn to grasp objects by watching generated videos.

Business Areas:
Motion Capture, Media and Entertainment, Video

Dexterous manipulation remains a challenging robotics problem, largely due to the difficulty of collecting extensive human demonstrations for learning. In this paper, we introduce \textsc{Gen2Real}, which replaces costly human demonstrations with a single generated video and derives robot skills from it. It combines three stages: demonstration generation, which leverages video generation with pose and depth estimation to yield hand-object trajectories; trajectory optimization, which uses a Physics-aware Interaction Optimization Model (PIOM) to impose physics consistency; and demonstration learning, which retargets human motions to a robot hand and stabilizes control with an anchor-based residual Proximal Policy Optimization (PPO) policy. Using only generated videos, the learned policy achieves a 77.3\% success rate on grasping tasks in simulation and demonstrates coherent executions on a real robot. We also conduct ablation studies to validate the contribution of each component and demonstrate the ability to specify tasks directly in natural language, highlighting the flexibility and robustness of \textsc{Gen2Real} in generalizing grasping skills from imagined videos to real-world execution.
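
Below is a minimal sketch of the anchor-based residual control idea the abstract describes, assuming a typical residual-RL setup: the retargeted trajectory supplies a nominal joint command at each timestep, and a PPO actor adds a small bounded correction on top. The class and parameter names (`AnchorResidualPolicy`, `residual_scale`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class AnchorResidualPolicy:
    """Tracks a retargeted reference ("anchor") trajectory and lets a
    trained RL policy (e.g., a PPO actor) add a bounded residual on top.
    This is a hypothetical interface sketching the general technique."""

    def __init__(self, anchor_trajectory, policy, residual_scale=0.1):
        self.anchor = anchor_trajectory       # (T, dof) retargeted robot joint targets
        self.policy = policy                  # callable: observation -> residual action
        self.residual_scale = residual_scale  # bound on corrections around the anchor

    def act(self, obs, t):
        # The anchor provides the nominal joint command at timestep t;
        # clamp the index so the final target is held past the trajectory end.
        anchor_action = self.anchor[min(t, len(self.anchor) - 1)]
        # The PPO actor outputs a residual, clipped to [-1, 1] per joint.
        residual = np.clip(self.policy(obs), -1.0, 1.0)
        # The executed command stays close to the demonstrated motion.
        return anchor_action + self.residual_scale * residual
```

Bounding the residual keeps the executed motion near the physics-optimized demonstration while still allowing closed-loop corrections, which is the usual motivation for residual policies over learning the full action from scratch.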

Country of Origin
🇭🇰 Hong Kong

Page Count
9 pages

Category
Computer Science:
Robotics