AnchorDream: Repurposing Video Diffusion for Embodiment-Aware Robot Data Synthesis

Published: December 12, 2025 | arXiv ID: 2512.11797v1

By: Junjie Ye, Rong Xue, Basile Van Hoorick, and more

Potential Business Impact:

Enables robots to learn new skills from only a few demonstrations.

Business Areas:
Virtual Reality Hardware, Software

The collection of large-scale and diverse robot demonstrations remains a major bottleneck for imitation learning, as real-world data acquisition is costly and simulators offer limited diversity and fidelity with pronounced sim-to-real gaps. While generative models present an attractive solution, existing methods often alter only visual appearances without creating new behaviors, or suffer from embodiment inconsistencies that yield implausible motions. To address these limitations, we introduce AnchorDream, an embodiment-aware world model that repurposes pretrained video diffusion models for robot data synthesis. AnchorDream conditions the diffusion process on robot motion renderings, anchoring the embodiment to prevent hallucination while synthesizing objects and environments consistent with the robot's kinematics. Starting from only a handful of human teleoperation demonstrations, our method scales them into large, diverse, high-quality datasets without requiring explicit environment modeling. Experiments show that the generated data leads to consistent improvements in downstream policy learning, with relative gains of 36.4% on simulator benchmarks and nearly doubled performance in real-world studies. These results suggest that grounding generative world models in robot motion provides a practical path toward scaling imitation learning.
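
The core idea described in the abstract, conditioning a pretrained video diffusion model on renderings of the robot's own motion so the synthesized scene stays consistent with the robot's kinematics, can be sketched roughly as below. This is a minimal illustration only: the channel-wise concatenation of rendered-robot latents with noisy scene latents, the class names, shapes, and the simplified DDPM-style update are all assumptions for exposition, not AnchorDream's actual architecture or code.

```python
# Minimal sketch (not the authors' implementation) of embodiment-anchored
# conditioning: a video denoiser receives an encoding of the rendered robot
# motion alongside the noisy scene latents, so the generated environment
# is forced to agree with the robot's kinematics.

import torch
import torch.nn as nn


class AnchoredDenoiser(nn.Module):
    """Toy video denoiser conditioned on a robot-motion rendering."""

    def __init__(self, channels: int = 8):
        super().__init__()
        # Noisy latents and the rendered-robot latents are concatenated along
        # the channel dimension, one common way to inject dense conditioning.
        self.net = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, noisy_latents, robot_render):
        # robot_render: latent encoding of the rendered robot motion, with the
        # same (frames, height, width) shape as the noisy latents.
        # Timestep embedding is omitted here for brevity.
        x = torch.cat([noisy_latents, robot_render], dim=1)
        return self.net(x)  # predicted noise for this denoising step


def denoise_step(model, noisy_latents, robot_render, alpha_t: float = 0.9):
    # One heavily simplified DDPM-style update, for illustration only.
    eps_hat = model(noisy_latents, robot_render)
    return (noisy_latents - (1 - alpha_t) ** 0.5 * eps_hat) / alpha_t ** 0.5


if __name__ == "__main__":
    B, C, T, H, W = 1, 8, 4, 16, 16  # batch, channels, frames, height, width
    model = AnchoredDenoiser(channels=C)
    latents = torch.randn(B, C, T, H, W)   # noisy scene latents
    robot = torch.randn(B, C, T, H, W)     # encoded robot-motion rendering
    out = denoise_step(model, latents, robot)
    print(out.shape)  # torch.Size([1, 8, 4, 16, 16])
```

The design choice illustrated here is the anchoring itself: because the robot rendering is a fixed conditioning signal rather than something the model generates, the robot's embodiment cannot be hallucinated, and only the surrounding objects and environment vary across synthesized rollouts.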

Page Count
9 pages

Category
Computer Science:
Robotics