PhysVideoGenerator: Towards Physically Aware Video Generation via Latent Physics Guidance
By: Siddarth Nilol Kundur Satish, Devesh Jaiswal, Hongyu Chen, and more
Potential Business Impact:
Makes videos look real, with correct physics.
Current video generation models produce aesthetically impressive videos but often fail to learn representations of real-world physical dynamics, resulting in artifacts such as unnatural object collisions, inconsistent gravity, and temporal flickering. In this work, we propose PhysVideoGenerator, a proof-of-concept framework that explicitly embeds a learnable physics prior into the video generation process. We introduce a lightweight predictor network, PredictorP, which regresses, directly from noisy diffusion latents, the high-level physical features produced by a pre-trained Video Joint Embedding Predictive Architecture (V-JEPA 2). These predicted physics tokens are injected into the temporal attention layers of a DiT-based generator (Latte) through a dedicated cross-attention mechanism. Our primary contribution is demonstrating the technical feasibility of this joint training paradigm: we show that diffusion latents contain sufficient information to recover V-JEPA 2 physical representations, and that the multi-task optimization remains stable throughout training. This report documents the architectural design, the technical challenges, and the validation of training stability, establishing a foundation for future large-scale evaluation of physics-aware generative models.
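For readers who prefer code, here is a minimal PyTorch sketch of the mechanism the abstract describes: a small MLP (standing in for PredictorP) regresses physics features from noisy latents, a cross-attention layer injects them as keys and values into the generator's temporal stream, and a multi-task loss combines denoising with feature regression. All dimensions, module shapes, and the loss weight lambda_phys are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PredictorP(nn.Module):
    """Regresses V-JEPA 2-style physics features from noisy diffusion latents."""

    def __init__(self, latent_dim: int = 1152, physics_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, physics_dim),
            nn.GELU(),
            nn.Linear(physics_dim, physics_dim),
        )

    def forward(self, noisy_latents: torch.Tensor) -> torch.Tensor:
        # (batch, tokens, latent_dim) -> (batch, tokens, physics_dim)
        return self.mlp(noisy_latents)


class PhysicsCrossAttention(nn.Module):
    """Injects predicted physics tokens into a temporal attention layer."""

    def __init__(self, hidden_dim: int = 1152, physics_dim: int = 1024, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim)
        self.attn = nn.MultiheadAttention(
            embed_dim=hidden_dim, kdim=physics_dim, vdim=physics_dim,
            num_heads=heads, batch_first=True,
        )

    def forward(self, hidden: torch.Tensor, physics_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the generator's temporal stream; keys and values
        # come from the predicted physics tokens. The residual connection keeps
        # the injection from destabilizing the pretrained attention pathway.
        attended, _ = self.attn(self.norm(hidden), physics_tokens, physics_tokens)
        return hidden + attended


def joint_loss(noise_pred, noise, pred_physics, vjepa_features, lambda_phys=0.1):
    """Multi-task objective: denoising loss plus physics-feature regression.

    vjepa_features are assumed to come from a frozen V-JEPA 2 encoder run on
    the clean video; lambda_phys is an assumed weighting, not a reported value.
    """
    return F.mse_loss(noise_pred, noise) + lambda_phys * F.mse_loss(pred_physics, vjepa_features)


# Smoke test with random tensors standing in for real latents and features.
z_t = torch.randn(2, 64, 1152)      # noisy diffusion latents
hidden = torch.randn(2, 64, 1152)   # temporal-attention hidden states
physics = PredictorP()(z_t)         # predicted physics tokens
hidden = PhysicsCrossAttention()(hidden, physics)
```

One plausible reason such a joint objective stays stable, consistent with the paper's report, is the residual form of the injection: when the physics tokens are uninformative early in training, the attention output can shrink toward zero and the generator falls back to its unconditioned temporal stream.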
Similar Papers
Improving the Physics of Video Generation with VJEPA-2 Reward Signal
CV and Pattern Recognition
Makes computer videos follow real-world physics rules.
PhysChoreo: Physics-Controllable Video Generation with Part-Aware Semantic Grounding
CV and Pattern Recognition
Makes realistically moving videos from a single picture.
PhyGDPO: Physics-Aware Groupwise Direct Preference Optimization for Physically Consistent Text-to-Video Generation
CV and Pattern Recognition
Makes videos follow real-world physics rules.