Bootstrapping Physics-Grounded Video Generation through VLM-Guided Iterative Self-Refinement

Published: November 25, 2025 | arXiv ID: 2511.20280v1

By: Yang Liu, Xilin Zhao, Peisong Wen, and more

Potential Business Impact:

Makes generated videos follow real-world physics.

Business Areas:
Motion Capture, Media and Entertainment, Video

Recent progress in video generation has led to impressive visual quality, yet current models still struggle to produce results that align with real-world physical principles. To this end, we propose an iterative self-refinement framework that leverages large language models and vision-language models to provide physics-aware guidance for video generation. Specifically, we introduce a multimodal chain-of-thought (MM-CoT) process that refines prompts based on feedback about physical inconsistencies, progressively enhancing generation quality. The method is training-free and plug-and-play, making it readily applicable to a wide range of video generation models. Experiments on the Physics-IQ benchmark show that our method improves the Physics-IQ score from 56.31 to 62.38. We hope this work serves as a preliminary exploration of physics-consistent video generation and may offer insights for future research.
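The abstract describes a generate-critique-refine loop: a video model produces a clip, a vision-language model flags physical inconsistencies, and a language model folds that feedback back into the prompt before regenerating. The following is a minimal Python sketch of such a loop under those assumptions; the function names and the callable interface (`generate_video`, `vlm_critique`, `llm_refine_prompt`, `max_iters`) are hypothetical stand-ins, not the paper's actual API.

```python
# Hypothetical sketch of the training-free refinement loop described in the
# abstract. The three callables are placeholders for whatever video generator,
# VLM critic, and LLM prompt-rewriter a user plugs in.
from typing import Callable

def refine_video(
    prompt: str,
    generate_video: Callable[[str], bytes],       # text-to-video model (assumed interface)
    vlm_critique: Callable[[bytes], str],         # VLM: describes physical inconsistencies, "" if none
    llm_refine_prompt: Callable[[str, str], str], # LLM: rewrites the prompt given the critique (MM-CoT step)
    max_iters: int = 3,
) -> bytes:
    """Iteratively regenerate a video, folding VLM physics feedback into the prompt."""
    video = generate_video(prompt)
    for _ in range(max_iters):
        issues = vlm_critique(video)  # e.g. "the ball accelerates upward after release"
        if not issues:                # no physical inconsistencies detected: stop early
            break
        prompt = llm_refine_prompt(prompt, issues)  # physics-aware prompt refinement
        video = generate_video(prompt)
    return video
```

Because the loop only reads generated outputs and rewrites prompts, any text-to-video model can be dropped in without modification, which is what makes the approach training-free and plug-and-play.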

Page Count
4 pages

Category
Computer Science:
CV and Pattern Recognition