Video Generators are Robot Policies
By: Junbang Liang, Pavel Tokmakov, Ruoshi Liu, and more
Potential Business Impact:
Teaches robots new tasks using videos instead of instructions.
Despite tremendous progress in dexterous manipulation, current visuomotor policies remain fundamentally limited by two challenges: they struggle to generalize under perceptual or behavioral distribution shifts, and their performance is constrained by the size of human demonstration data. In this paper, we use video generation as a proxy for robot policy learning to address both limitations simultaneously. We propose Video Policy, a modular framework that combines video and action generation and can be trained end-to-end. Our results demonstrate that learning to generate videos of robot behavior allows policies to be extracted from minimal demonstration data, significantly improving robustness and sample efficiency. Our method shows strong generalization to unseen objects, backgrounds, and tasks, both in simulation and in the real world. We further highlight that task success is closely tied to the generated video, with action-free video data providing critical benefits for generalizing to novel tasks. By leveraging large-scale video generative models, we achieve superior performance compared to traditional behavior cloning, paving the way for more scalable and data-efficient robot policy learning.
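To make the general pattern concrete, the sketch below illustrates one way a "video generation as a proxy for policy learning" setup could be wired: a video-prediction backbone produces future-frame latents, a small action head decodes an action chunk from them, and both modules are trained end-to-end with a behavior-cloning loss. This is a minimal toy illustration, not the paper's architecture; the module names, layer sizes, horizon, and loss are all hypothetical placeholders.

```python
# Minimal sketch (assumed, not the authors' code): video-prediction latents
# feeding an action head, trained end-to-end with a behavior-cloning loss.
import torch
import torch.nn as nn

class ToyVideoBackbone(nn.Module):
    """Stand-in for a pretrained video generator: encodes observed frames and
    predicts future-frame latents. All sizes here are hypothetical."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        self.predictor = nn.GRU(latent_dim, latent_dim, batch_first=True)

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        B, T, C, H, W = frames.shape
        z = self.encoder(frames.reshape(B * T, C, H, W)).reshape(B, T, -1)
        future_latents, _ = self.predictor(z)        # (B, T, latent_dim)
        return future_latents

class ActionHead(nn.Module):
    """Decodes a short action chunk from the predicted video latents."""
    def __init__(self, latent_dim=256, action_dim=7, horizon=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim * horizon),
        )
        self.action_dim, self.horizon = action_dim, horizon

    def forward(self, latents):                      # use the last predicted latent
        out = self.mlp(latents[:, -1])
        return out.reshape(-1, self.horizon, self.action_dim)

if __name__ == "__main__":
    backbone, head = ToyVideoBackbone(), ActionHead()
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-4)

    frames = torch.randn(2, 4, 3, 64, 64)            # dummy demonstration clip
    expert_actions = torch.randn(2, 8, 7)            # dummy expert action chunk

    latents = backbone(frames)
    pred_actions = head(latents)
    loss = nn.functional.mse_loss(pred_actions, expert_actions)  # behavior-cloning loss
    loss.backward()                                  # gradients reach both modules (end-to-end)
    opt.step()
    print("loss:", loss.item())
```

In practice the backbone would be a large pretrained video generative model and the behavior-cloning term would be combined with a video-generation objective; the sketch only shows how action supervision can be attached to predicted video latents and optimized jointly.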
Similar Papers
From Generated Human Videos to Physically Plausible Robot Trajectories
Robotics
Robots copy human moves from fake videos.
Scalable Policy Evaluation with Video World Models
Robotics
Lets robots practice tasks safely without real robots.
LuciBot: Automated Robot Policy Learning from Generated Videos
CV and Pattern Recognition
Teaches robots to do hard jobs by watching videos.