AnchorDream: Repurposing Video Diffusion for Embodiment-Aware Robot Data Synthesis
By: Junjie Ye, Rong Xue, Basile Van Hoorick, and more
Potential Business Impact:
Lets robots learn new skills from just a few examples.
The collection of large-scale and diverse robot demonstrations remains a major bottleneck for imitation learning, as real-world data acquisition is costly and simulators offer limited diversity and fidelity with pronounced sim-to-real gaps. While generative models present an attractive solution, existing methods often alter only visual appearances without creating new behaviors, or suffer from embodiment inconsistencies that yield implausible motions. To address these limitations, we introduce AnchorDream, an embodiment-aware world model that repurposes pretrained video diffusion models for robot data synthesis. AnchorDream conditions the diffusion process on robot motion renderings, anchoring the embodiment to prevent hallucination while synthesizing objects and environments consistent with the robot's kinematics. Starting from only a handful of human teleoperation demonstrations, our method scales them into large, diverse, high-quality datasets without requiring explicit environment modeling. Experiments show that the generated data leads to consistent improvements in downstream policy learning, with relative gains of 36.4% on simulator benchmarks and nearly doubled performance in real-world studies. These results suggest that grounding generative world models in robot motion provides a practical path toward scaling imitation learning.
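The core mechanism described in the abstract is conditioning a pretrained video diffusion model on per-frame renderings of the robot's own motion, so the embodiment stays fixed while the surrounding scene is synthesized. The sketch below illustrates that conditioning pattern in plain PyTorch with a toy denoiser, a placeholder renderer, and a deliberately simplified sampling loop; all names (render_robot_motion, ToyVideoDenoiser, sample_anchored_video) are hypothetical illustrations and not taken from the AnchorDream release.

```python
# Minimal sketch of embodiment-anchored video generation, assuming a generic
# video diffusion denoiser. Everything here is a stand-in for illustration,
# not the AnchorDream implementation.
import torch
import torch.nn as nn


def render_robot_motion(joint_trajectory: torch.Tensor, frames: int = 16,
                        height: int = 64, width: int = 64) -> torch.Tensor:
    """Placeholder renderer: maps a joint-space trajectory (frames x DoF) to
    per-frame robot-only renderings (frames x 3 x H x W). A real system would
    rasterize the robot mesh using its kinematics and camera parameters."""
    # Toy "rendering": broadcast the mean joint value into an image-shaped tensor.
    base = joint_trajectory.mean(dim=-1).view(frames, 1, 1, 1)
    return base.expand(frames, 3, height, width).clone()


class ToyVideoDenoiser(nn.Module):
    """Stand-in for a pretrained video diffusion backbone. It receives the
    noisy video concatenated channel-wise with the robot renderings, so the
    robot's appearance and motion anchor the generation while the scene around
    the robot is free to vary."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, noisy_video: torch.Tensor, robot_frames: torch.Tensor,
                t: torch.Tensor) -> torch.Tensor:
        # Tensors use (B, C, T, H, W) layout; conditioning by channel concatenation.
        x = torch.cat([noisy_video, robot_frames], dim=1)
        return self.net(x)  # predicted noise


@torch.no_grad()
def sample_anchored_video(denoiser: nn.Module, robot_frames: torch.Tensor,
                          steps: int = 10) -> torch.Tensor:
    """Highly simplified diffusion-style sampling loop with motion conditioning."""
    video = torch.randn_like(robot_frames)          # start from pure noise
    for step in reversed(range(steps)):
        t = torch.full((video.shape[0],), step)
        eps = denoiser(video, robot_frames, t)      # noise estimate, conditioned on the robot
        video = video - eps / steps                 # crude update rule, for illustration only
    return video


if __name__ == "__main__":
    traj = torch.randn(16, 7)                                           # 16 frames, 7-DoF arm
    robot = render_robot_motion(traj).permute(1, 0, 2, 3).unsqueeze(0)  # (1, 3, 16, 64, 64)
    synthetic = sample_anchored_video(ToyVideoDenoiser(), robot)
    print(synthetic.shape)                                              # torch.Size([1, 3, 16, 64, 64])
```

The design choice to condition on renderings rather than, say, joint-angle vectors keeps the conditioning signal in pixel space, which is the space a pretrained video diffusion model already understands; the sampler and renderer above are assumptions made only to keep the example self-contained.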
Similar Papers
ManipDreamer3D: Synthesizing Plausible Robotic Manipulation Video with Occupancy-aware 3D Trajectory
Robotics
Robots learn to move objects from pictures and words.
X-Humanoid: Robotize Human Videos to Generate Humanoid Videos at Scale
CV and Pattern Recognition
Turns human videos into robot training videos.
From Generated Human Videos to Physically Plausible Robot Trajectories
Robotics
Robots copy human moves from generated videos.