NewtonGen: Physics-Consistent and Controllable Text-to-Video Generation via Neural Newtonian Dynamics
By: Yu Yuan, Xijun Wang, Tharindu Wickremasinghe, and more
Potential Business Impact:
Makes videos move like real life.
A primary bottleneck in large-scale text-to-video generation today is the lack of physical consistency and controllability. Despite recent advances, state-of-the-art models often produce unrealistic motions, such as objects falling upward or abruptly changing velocity and direction. Moreover, these models lack precise parameter control and struggle to generate physically consistent dynamics under different initial conditions. We argue that this fundamental limitation stems from current models learning motion distributions solely from appearance, without an understanding of the underlying dynamics. In this work, we propose NewtonGen, a framework that integrates data-driven synthesis with learnable physical principles. At its core lies a trainable Neural Newtonian Dynamics (NND) module, which can model and predict a variety of Newtonian motions, thereby injecting latent dynamical constraints into the video generation process. By jointly leveraging data priors and dynamical guidance, NewtonGen enables physically consistent video synthesis with precise parameter control.
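The abstract does not spell out how the NND module is implemented, but the idea of a trainable model that rolls out Newtonian motion from controllable initial conditions can be illustrated with a minimal sketch. The code below assumes a small force network that predicts acceleration from an object's state and user-specified physical parameters, with states integrated frame by frame via an explicit Euler update; all names (`NeuralNewtonianDynamics`, `force_net`, the state layout, and the parameter choices) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class NeuralNewtonianDynamics(nn.Module):
    """Hypothetical sketch of a trainable Newtonian dynamics module.

    A small network predicts acceleration from the current object state
    (position, velocity) and a vector of physical parameters (e.g. mass,
    gravity, drag). States are rolled out with an explicit Euler integrator,
    so each frame's motion follows Newtonian update rules by construction.
    """

    def __init__(self, state_dim: int = 4, param_dim: int = 3, hidden: int = 64):
        super().__init__()
        # state = (x, y, vx, vy); params = user-controllable physical conditions
        self.force_net = nn.Sequential(
            nn.Linear(state_dim + param_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, state_dim // 2),  # predicted acceleration (ax, ay)
        )

    def step(self, state: torch.Tensor, params: torch.Tensor, dt: float) -> torch.Tensor:
        """One Newtonian update: v' = v + a*dt, x' = x + v'*dt."""
        pos, vel = state[..., :2], state[..., 2:]
        accel = self.force_net(torch.cat([state, params], dim=-1))
        vel_next = vel + accel * dt
        pos_next = pos + vel_next * dt
        return torch.cat([pos_next, vel_next], dim=-1)

    def rollout(self, state0: torch.Tensor, params: torch.Tensor,
                steps: int, dt: float = 1.0 / 24.0) -> torch.Tensor:
        """Roll out a trajectory of object states, one per video frame."""
        states = [state0]
        for _ in range(steps):
            states.append(self.step(states[-1], params, dt))
        return torch.stack(states, dim=1)  # (batch, steps + 1, state_dim)


# Example: roll out 48 frames of motion for a tossed object under a chosen
# initial velocity; such a trajectory could serve as a latent dynamical
# constraint (e.g. a conditioning signal) for the video generator.
if __name__ == "__main__":
    nnd = NeuralNewtonianDynamics()
    state0 = torch.tensor([[0.0, 1.0, 2.0, 3.0]])   # x, y, vx, vy
    params = torch.tensor([[1.0, -9.8, 0.05]])      # mass, gravity, drag (illustrative)
    traj = nnd.rollout(state0, params, steps=48)
    print(traj.shape)  # torch.Size([1, 49, 4])
```

Because the dynamics parameters are explicit inputs, a user could vary initial velocity or gravity and obtain correspondingly different but still physically consistent trajectories, which is the kind of parameter control the abstract emphasizes.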
Similar Papers
PhysCtrl: Generative Physics for Controllable and Physics-Grounded Video Generation
CV and Pattern Recognition
Makes videos move realistically, like real objects.
MoReGen: Multi-Agent Motion-Reasoning Engine for Code-based Text-to-Video Synthesis
CV and Pattern Recognition
Makes videos follow real-world physics rules.
PhysChoreo: Physics-Controllable Video Generation with Part-Aware Semantic Grounding
CV and Pattern Recognition
Makes videos move realistically from one picture.