Learning Generalizable Hand-Object Tracking from Synthetic Demonstrations
By: Yinhuai Wang, Runyi Yu, Hok Wai Tsui, and more
We present a system for learning generalizable hand-object tracking controllers purely from synthetic data, without requiring any human demonstrations. Our approach makes two key contributions: (1) HOP, a Hand-Object Planner that synthesizes diverse hand-object trajectories; and (2) HOT, a Hand-Object Tracker that bridges synthetic-to-physical transfer through reinforcement learning and interaction imitation learning, delivering a generalizable controller conditioned on target hand-object states. Our method extends to diverse object shapes and hand morphologies. Through extensive evaluations, we show that our approach enables dexterous hands to track challenging, long-horizon sequences, including object rearrangement and agile in-hand reorientation. These results represent a significant step toward scalable foundation controllers for manipulation that learn entirely from synthetic data, breaking the data bottleneck that has long constrained progress in dexterous manipulation.
Similar Papers
Learning to Transfer Human Hand Skills for Robot Manipulations
Robotics
Teaches robots to copy human hand movements.
MobileH2R: Learning Generalizable Human to Mobile Robot Handover Exclusively from Scalable and Diverse Synthetic Data
Robotics
Robot learns to catch things from people anywhere.
Toward Human-Robot Teaming: Learning Handover Behaviors from 3D Scenes
Robotics
Robots learn to grab objects from people.