PhysWorld: From Real Videos to World Models of Deformable Objects via Physics-Aware Demonstration Synthesis
By: Yu Yang, Zhilu Zhang, Xiang Zhang, and more
Potential Business Impact:
Teaches robots to predict how things move.
Interactive world models that simulate object dynamics are crucial for robotics, VR, and AR. However, learning physics-consistent dynamics models from limited real-world video data remains a significant challenge, especially for deformable objects with spatially varying physical properties. To overcome this data scarcity, we propose PhysWorld, a novel framework that uses a simulator to synthesize physically plausible and diverse demonstrations for learning efficient world models. Specifically, we first construct a physics-consistent digital twin within an MPM (material point method) simulator via constitutive model selection and global-to-local optimization of physical properties. We then apply part-aware perturbations to the physical properties and generate varied motion patterns for the digital twin, synthesizing extensive and diverse demonstrations. Finally, using these demonstrations, we train a lightweight GNN-based world model with the physical properties embedded. The real video can be used to further refine the physical properties. PhysWorld achieves accurate and fast future prediction for various deformable objects and generalizes well to novel interactions. Experiments show that PhysWorld achieves competitive performance while enabling inference 47 times faster than PhysTwin, the recent state-of-the-art method.
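The part-aware perturbation step lends itself to a compact illustration. Below is a minimal NumPy sketch of how per-particle physical properties (e.g., Young's modulus) might be perturbed with one multiplicative factor per object part, so each part stays internally consistent while parts vary independently across synthesized demonstrations. All names here (perturb_part_properties, rel_scale, sim.rollout) are illustrative assumptions, not APIs from the paper.

```python
import numpy as np

def perturb_part_properties(base_props, part_ids, rel_scale=0.2, rng=None):
    """Part-aware perturbation: draw one multiplicative factor per part,
    so properties stay coherent within a part but differ across demos.

    base_props : (N,) per-particle property values (e.g., Young's modulus)
    part_ids   : (N,) integer part label for each particle
    rel_scale  : maximum relative perturbation per part
    """
    rng = np.random.default_rng() if rng is None else rng
    props = base_props.astype(float).copy()
    for pid in np.unique(part_ids):
        factor = rng.uniform(1.0 - rel_scale, 1.0 + rel_scale)
        props[part_ids == pid] *= factor  # same factor for the whole part
    return props

# Hypothetical synthesis loop: roll out the digital twin under perturbed
# properties and sampled motions (sim.rollout stands in for an MPM call).
# demos = [sim.rollout(perturb_part_properties(E, parts), motion)
#          for motion in sampled_motions]
```

Each rollout pairs a perturbed property field with a distinct motion pattern, which is what gives the GNN-based world model a diverse, physics-consistent training set despite starting from a single real video.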
Similar Papers
Robot Learning from a Physical World Model
Robotics
Teaches robots to do tasks by watching generated videos.
PhysChoreo: Physics-Controllable Video Generation with Part-Aware Semantic Grounding
CV and Pattern Recognition
Makes videos move realistically from one picture.
GeoWorld: Unlocking the Potential of Geometry Models to Facilitate High-Fidelity 3D Scene Generation
CV and Pattern Recognition
Creates realistic 3D worlds from pictures.