PhysChoreo: Physics-Controllable Video Generation with Part-Aware Semantic Grounding
By: Haoze Zhang, Tianyu Huang, Zichen Wan, and more
Potential Business Impact:
Makes a single picture into a video that moves realistically.
While recent video generation models have achieved impressive visual fidelity, they often lack explicit physical controllability and plausibility. To address this, some recent studies have attempted to guide video generation with physics-based rendering. However, these methods face inherent challenges in accurately modeling complex physical properties and in effectively controlling the resulting physical behavior over extended temporal sequences. In this work, we introduce PhysChoreo, a novel framework that can generate videos with diverse controllability and physical realism from a single image. Our method consists of two stages: first, it estimates the static initial physical properties of all objects in the image through part-aware physical property reconstruction; then, through temporally instructed and physically editable simulation, it synthesizes high-quality videos with rich dynamic behaviors and physical realism. Experimental results show that PhysChoreo generates videos with rich behaviors and physical realism, outperforming state-of-the-art methods on multiple evaluation metrics.
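To make the two-stage design concrete, here is a minimal Python sketch of the pipeline shape the abstract describes. The paper's actual interfaces are not given here, so every name (`reconstruct_part_properties`, `simulate`, the per-part fields) is a hypothetical placeholder, not PhysChoreo's API:

```python
from dataclasses import dataclass, field

# Hypothetical data structures: all names and values below are
# illustrative assumptions, not taken from the paper.

@dataclass
class PartProperties:
    """Static physical properties estimated for one semantic part."""
    name: str
    mass: float            # kg
    youngs_modulus: float  # Pa; controls stiffness in the simulator
    material: str          # e.g. "elastic", "rigid", "granular"

@dataclass
class SceneState:
    parts: list = field(default_factory=list)

def reconstruct_part_properties(image) -> SceneState:
    """Stage 1 (sketch): part-aware physical property reconstruction.

    A real system would segment the image into semantic parts and
    estimate per-part physics; here we return fixed placeholders.
    """
    return SceneState(parts=[
        PartProperties("stem", mass=0.05, youngs_modulus=5e6, material="elastic"),
        PartProperties("pot",  mass=1.2,  youngs_modulus=1e9, material="rigid"),
    ])

def simulate(state: SceneState, instruction: str, num_frames: int = 8):
    """Stage 2 (sketch): temporally instructed, physically editable simulation.

    A real simulator would apply instruction-driven forces and property
    edits over time; here each frame just records the timestep.
    """
    for t in range(num_frames):
        yield {"t": t, "instruction": instruction,
               "parts": [p.name for p in state.parts]}

if __name__ == "__main__":
    state = reconstruct_part_properties(image=None)  # placeholder input
    for frame in simulate(state, instruction="sway the stem in the wind"):
        print(frame)
```

The point of the separation is that Stage 1 fixes static per-part properties once from the single input image, while Stage 2 can be re-run with different temporal instructions or edited properties to choreograph different behaviors from the same reconstruction.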
Similar Papers
PhysCtrl: Generative Physics for Controllable and Physics-Grounded Video Generation
CV and Pattern Recognition
Makes videos move realistically, like real objects.
Bootstrapping Physics-Grounded Video Generation through VLM-Guided Iterative Self-Refinement
CV and Pattern Recognition
Makes videos follow real-world physics rules.
Planning with Sketch-Guided Verification for Physics-Aware Video Generation
CV and Pattern Recognition
Makes videos move more realistically and smoothly.