One-Shot Real-World Demonstration Synthesis for Scalable Bimanual Manipulation
By: Huayi Zhou, Kui Jia
Learning dexterous bimanual manipulation policies critically depends on large-scale, high-quality demonstrations, yet current paradigms face inherent trade-offs: teleoperation provides physically grounded data but is prohibitively labor-intensive, while simulation-based synthesis scales efficiently but suffers from sim-to-real gaps. We present BiDemoSyn, a framework that synthesizes contact-rich, physically feasible bimanual demonstrations from a single real-world example. The key idea is to decompose tasks into invariant coordination blocks and variable, object-dependent adjustments, then adapt them to new scenes through vision-guided alignment and lightweight trajectory optimization. This enables the generation of thousands of diverse, feasible demonstrations within a few hours, without repeated teleoperation or reliance on imperfect simulation. Across six dual-arm tasks, we show that policies trained on BiDemoSyn data generalize robustly to novel object poses and shapes, significantly outperforming recent baselines. By bridging the gap between efficiency and real-world fidelity, BiDemoSyn offers a scalable path toward practical imitation learning for complex bimanual manipulation without compromising physical grounding.
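
To make the described pipeline concrete, below is a minimal Python sketch of the synthesis loop: invariant coordination blocks are reused verbatim, object-dependent segments are re-anchored to a newly estimated object pose, and the stitched trajectory is lightly smoothed as a stand-in for the paper's trajectory optimization. All function names, the waypoint representation (4x4 end-effector poses), and the single-object-pose-per-scene assumption are illustrative assumptions, not the authors' actual implementation.

import numpy as np

def align_segment(waypoints, seed_obj_pose, new_obj_pose):
    # Re-anchor object-dependent waypoints from the seed object pose to a new one
    # (vision-guided alignment; the pose estimate itself is assumed given here).
    delta = new_obj_pose @ np.linalg.inv(seed_obj_pose)   # relative SE(3) transform
    return [delta @ wp for wp in waypoints]

def smooth_positions(waypoints, iterations=10, weight=0.5):
    # Lightweight smoothing of end-effector positions; a simplified stand-in
    # for the paper's lightweight trajectory optimization.
    traj = np.stack([wp[:3, 3] for wp in waypoints])
    for _ in range(iterations):
        traj[1:-1] = (1 - weight) * traj[1:-1] + weight * 0.5 * (traj[:-2] + traj[2:])
    return traj

def synthesize(seed_demo, seed_obj_pose, new_obj_poses):
    # Generate one adapted demonstration per new object pose from a single seed demo.
    # seed_demo = {"invariant": [4x4 poses], "variable": [4x4 poses]} (hypothetical layout).
    demos = []
    for new_pose in new_obj_poses:
        variable = align_segment(seed_demo["variable"], seed_obj_pose, new_pose)
        trajectory = smooth_positions(seed_demo["invariant"] + variable)
        demos.append({"invariant": seed_demo["invariant"],
                      "variable": variable,
                      "trajectory": trajectory})
    return demos

The design intuition this sketch mirrors is that the invariant blocks preserve the bimanual coordination captured in the single real demonstration, while only the object-dependent segments are re-grounded for each new scene, so every synthesized trajectory stays close to physically validated motion.
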
Similar Papers
DexMan: Learning Bimanual Dexterous Manipulation from Human and Generated Videos
Robotics
Robots learn to do tasks by watching videos.
Dexterous Manipulation Transfer via Progressive Kinematic-Dynamic Alignment
Robotics
Robots copy human hand moves from videos.
Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration
Robotics
Robots learn to do tasks from watching videos.