Mitty: Diffusion-based Human-to-Robot Video Generation
By: Yiren Song, Cheng Liu, Weijia Mao, and more
Learning directly from human demonstration videos is a key milestone toward scalable and generalizable robot learning. Yet existing methods rely on intermediate representations such as keypoints or trajectories, which introduce information loss and cumulative errors that harm temporal and visual consistency. We present Mitty, a Diffusion Transformer that enables video In-Context Learning for end-to-end Human2Robot video generation. Built on a pretrained video diffusion model, Mitty leverages strong visual-temporal priors to translate human demonstrations into robot-execution videos without action labels or intermediate abstractions. Demonstration videos are compressed into condition tokens and fused with robot denoising tokens through bidirectional attention during diffusion. To mitigate paired-data scarcity, we also develop an automatic synthesis pipeline that produces high-quality human-robot pairs from large egocentric datasets. Experiments on Human2Robot and EPIC-Kitchens show that Mitty achieves state-of-the-art results and strong generalization to unseen environments, offering new insights for scalable robot learning from human observations.
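To make the conditioning mechanism concrete, here is a minimal PyTorch sketch of the in-context fusion the abstract describes: condition tokens from the human demonstration are concatenated with the noisy robot tokens and processed by bidirectional (unmasked) self-attention inside a transformer block, so the robot stream can attend to the demonstration at every denoising step. This is an illustrative assumption, not the authors' implementation; the names DiTFusionBlock, d_model, and n_heads and the token shapes are hypothetical, and a real Mitty block would also carry diffusion-timestep conditioning.

```python
# Minimal sketch (not the authors' code) of fusing demonstration condition
# tokens with robot denoising tokens via bidirectional self-attention.
import torch
import torch.nn as nn

class DiTFusionBlock(nn.Module):  # hypothetical name
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, cond_tokens: torch.Tensor, robot_tokens: torch.Tensor) -> torch.Tensor:
        # Concatenate the two streams along the sequence axis; with no causal
        # mask, attention is bidirectional across condition and robot tokens.
        x = torch.cat([cond_tokens, robot_tokens], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # Only the robot tokens are being denoised; split the condition
        # tokens back off before returning.
        n_cond = cond_tokens.shape[1]
        return x[:, n_cond:]

# Usage: a batch of 2, with 64 demonstration tokens conditioning 64 noisy
# robot tokens (shapes are illustrative).
block = DiTFusionBlock()
demo = torch.randn(2, 64, 512)
noisy_robot = torch.randn(2, 64, 512)
denoised = block(demo, noisy_robot)
print(denoised.shape)  # torch.Size([2, 64, 512])
```

Fusing by sequence concatenation rather than a separate cross-attention layer lets one set of attention weights serve both streams, which is how in-context conditioning is commonly realized in DiT-style video models.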
Similar Papers
From Generated Human Videos to Physically Plausible Robot Trajectories
Robotics
Robots copy human moves from generated videos.
X-Diffusion: Training Diffusion Policies on Cross-Embodiment Human Demonstrations
Robotics
Teaches robots to imitate human actions across different embodiments.
Playmate2: Training-Free Multi-Character Audio-Driven Animation via Diffusion Transformer with Reward Feedback
CV and Pattern Recognition
Animates multiple talking characters from audio.