Bootstrapping Physics-Grounded Video Generation through VLM-Guided Iterative Self-Refinement
By: Yang Liu, Xilin Zhao, Peisong Wen and more
Potential Business Impact:
Makes generated videos follow real-world physics rules.
Recent progress in video generation has led to impressive visual quality, yet current models still struggle to produce results that align with real-world physical principles. To address this, we propose an iterative self-refinement framework that leverages large language models and vision-language models to provide physics-aware guidance for video generation. Specifically, we introduce a multimodal chain-of-thought (MM-CoT) process that refines prompts based on feedback about physical inconsistencies, progressively improving generation quality. The method is training-free and plug-and-play, making it readily applicable to a wide range of video generation models. Experiments on the Physics-IQ benchmark show that our method improves the Physics-IQ score from 56.31 to 62.38. We hope this work serves as a preliminary exploration of physics-consistent video generation and may offer insights for future research.
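The abstract describes a generate-critique-refine loop, so a minimal sketch of that control flow may help make it concrete. The helper names below (generate_video, vlm_critique, llm_refine_prompt) and the stopping rule are assumptions for illustration, not the authors' implementation; only the overall loop structure comes from the abstract.

```python
# Sketch of VLM-guided iterative self-refinement, assuming hypothetical helpers:
# generate_video (any plug-and-play text-to-video backend), vlm_critique (a VLM
# that reports physical inconsistencies), and llm_refine_prompt (an LLM that
# rewrites the prompt given that feedback). These are placeholders, not the paper's API.

def generate_video(prompt: str) -> str:
    """Placeholder for any text-to-video backend (the method is model-agnostic)."""
    return f"<video rendered from prompt: {prompt!r}>"

def vlm_critique(video: str, prompt: str) -> str:
    """Placeholder for a VLM check, e.g. 'the ball passes through the table'.
    Returns an empty string when no physical inconsistency is found."""
    return ""  # stub: pretend the video is already physically consistent

def llm_refine_prompt(prompt: str, critique: str) -> str:
    """Placeholder for the multimodal chain-of-thought step: an LLM rewrites the
    prompt so the next generation avoids the reported inconsistencies."""
    return f"{prompt} (revised to address: {critique})"

def physics_refine(initial_prompt: str, max_iters: int = 3) -> tuple[str, str]:
    """Training-free loop: generate, critique, refine the prompt, regenerate."""
    prompt = initial_prompt
    video = generate_video(prompt)
    for _ in range(max_iters):
        critique = vlm_critique(video, prompt)
        if not critique:  # stop once the VLM reports no remaining physics issues
            break
        prompt = llm_refine_prompt(prompt, critique)
        video = generate_video(prompt)
    return video, prompt

if __name__ == "__main__":
    final_video, final_prompt = physics_refine("a glass of water tipping over on a wooden table")
    print(final_prompt)
```

Because all physics-aware guidance enters through the prompt, the loop never touches model weights, which is what makes the approach training-free and applicable to different video generators.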
Similar Papers
Video Finetuning Improves Reasoning Between Frames
CV and Pattern Recognition
Helps computers understand video stories better.
PhyVLLM: Physics-Guided Video Language Model with Motion-Appearance Disentanglement
CV and Pattern Recognition
Helps computers understand how things move in videos.
PhysChoreo: Physics-Controllable Video Generation with Part-Aware Semantic Grounding
CV and Pattern Recognition
Makes videos move realistically from one picture.