Robot Learning from a Physical World Model
By: Jiageng Mao, Sicheng He, Hao-Ning Wu, and more
Potential Business Impact:
Teaches robots to do tasks by watching generated videos.
We introduce PhysWorld, a framework that enables robot learning from video generation through physical world modeling. Recent video generation models can synthesize photorealistic visual demonstrations from language commands and images, offering a powerful yet underexplored source of training signals for robotics. However, directly retargeting pixel motions from generated videos to robots neglects physics, often resulting in inaccurate manipulations. PhysWorld addresses this limitation by coupling video generation with physical world reconstruction. Given a single image and a task command, our method generates a task-conditioned video and reconstructs the underlying physical world from it; the generated video motions are then grounded into physically accurate actions through object-centric residual reinforcement learning with the physical world model. This synergy transforms implicit visual guidance into physically executable robotic trajectories, eliminating the need for real robot data collection and enabling zero-shot generalizable robotic manipulation. Experiments on diverse real-world tasks demonstrate that PhysWorld substantially improves manipulation accuracy compared to previous approaches. Visit the project webpage for details: https://pointscoder.github.io/PhysWorld_Web/
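The grounding step pairs a base action, retargeted from the generated video, with a small learned residual that is optimized against the reconstructed physics rather than the real robot. The sketch below is a toy illustration of that idea only, assuming a hypothetical 1-D pushing world and a simple evolution-strategies loop in place of the paper's reinforcement-learning algorithm; all names (`ToyPhysicsWorld`, `train_residual`, the friction parameter) are illustrative and not from the authors' code.

```python
# Toy sketch of object-centric residual learning (hypothetical names, not the
# authors' implementation). A base action comes from motions retargeted out of
# the generated video; a learned residual corrects it so the object in a
# reconstructed physics world reaches the video-predicted goal.
import numpy as np


class ToyPhysicsWorld:
    """Stand-in for the reconstructed physical world model (1-D object pushing)."""

    def __init__(self, friction=0.8):
        self.friction = friction
        self.object_pos = 0.0

    def reset(self):
        self.object_pos = 0.0
        return self.object_pos

    def step(self, action):
        # Physics that raw pixel retargeting ignores: the push is attenuated by friction.
        self.object_pos += self.friction * float(action)
        return self.object_pos


def rollout(env, video_actions, residual_gain, target):
    """Apply base (video-derived) actions plus an object-centric residual; return reward."""
    pos = env.reset()
    for base_action in video_actions:
        residual = residual_gain * (target - pos)  # conditioned on the object's state
        pos = env.step(base_action + residual)
    return -abs(target - pos)  # reward: negative final object-pose error


def train_residual(env, video_actions, target, iters=300, sigma=0.05, lr=0.2):
    """Tiny antithetic evolution-strategies loop standing in for the RL optimizer."""
    gain = 0.0
    for _ in range(iters):
        eps = np.random.randn()
        r_plus = rollout(env, video_actions, gain + sigma * eps, target)
        r_minus = rollout(env, video_actions, gain - sigma * eps, target)
        gain += lr * (r_plus - r_minus) / (2.0 * sigma) * eps  # ascend the reward
    return gain


if __name__ == "__main__":
    env = ToyPhysicsWorld(friction=0.8)
    target = 1.0
    # Base actions naively copied from the video assume friction-free motion.
    video_actions = [0.25, 0.25, 0.25, 0.25]
    print("video-only error:   ", abs(rollout(env, video_actions, 0.0, target)))
    gain = train_residual(env, video_actions, target)
    print("residual gain:      ", round(gain, 3))
    print("with-residual error:", abs(rollout(env, video_actions, gain, target)))
```

The point of the toy example is the division of labor: the video supplies the overall motion, while the residual, trained purely inside the reconstructed world model, absorbs the physical effects (here, friction) that pixel-level retargeting cannot capture, so no real-robot data is needed for the correction.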
Similar Papers
PhysWorld: From Real Videos to World Models of Deformable Objects via Physics-Aware Demonstration Synthesis
CV and Pattern Recognition
Teaches robots to predict how things move.
PhysicalAgent: Towards General Cognitive Robotics with Foundation World Models
Robotics
Robots learn to do tasks by watching videos.
WoW: Towards a World omniscient World model Through Embodied Interaction
Robotics
Robots learn real-world physics by doing.