Score: 4

NewtonGen: Physics-Consistent and Controllable Text-to-Video Generation via Neural Newtonian Dynamics

Published: September 25, 2025 | arXiv ID: 2509.21309v1

By: Yu Yuan, Xijun Wang, Tharindu Wickremasinghe, and more

BigTech Affiliations: Samsung

Potential Business Impact:

Makes generated videos obey real-world physics (e.g., gravity, steady velocity) while allowing precise control over motion parameters.

Business Areas:
Motion Capture, Media and Entertainment, Video

A primary bottleneck in large-scale text-to-video generation today is achieving physical consistency and controllability. Despite recent advances, state-of-the-art models often produce unrealistic motions, such as objects falling upward or abruptly changing velocity and direction. Moreover, these models lack precise parameter control and struggle to generate physically consistent dynamics under different initial conditions. We argue that this fundamental limitation stems from current models learning motion distributions solely from appearance, without an understanding of the underlying dynamics. In this work, we propose NewtonGen, a framework that integrates data-driven synthesis with learnable physical principles. At its core lies trainable Neural Newtonian Dynamics (NND), which can model and predict a variety of Newtonian motions, thereby injecting latent dynamical constraints into the video generation process. By jointly leveraging data priors and dynamical guidance, NewtonGen enables physically consistent video synthesis with precise parameter control.
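The abstract describes NND as a trainable module that models Newtonian motion and injects latent dynamical constraints into generation. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: an MLP (`accel_net`) predicts acceleration from the current state, and a semi-implicit Euler step unrolls it into a per-frame trajectory that could serve as a guidance signal. All names (`NeuralNewtonianDynamics`, `rollout`) and the 2D state layout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class NeuralNewtonianDynamics(nn.Module):
    """Hypothetical sketch of an NND-style module: a small MLP predicts
    acceleration from the current state; integrating it gives a
    physically structured trajectory for constraining generation."""

    def __init__(self, state_dim: int = 4, hidden_dim: int = 64):
        super().__init__()
        # Assumed state layout: (x, y, vx, vy); the network outputs (ax, ay).
        self.accel_net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, state_dim // 2),
        )

    def forward(self, state: torch.Tensor, dt: float = 1.0 / 24.0) -> torch.Tensor:
        # Semi-implicit Euler step: a discrete form of Newton's second law.
        pos, vel = state[..., :2], state[..., 2:]
        accel = self.accel_net(state)
        vel = vel + dt * accel
        pos = pos + dt * vel
        return torch.cat([pos, vel], dim=-1)


def rollout(model: nn.Module, init_state: torch.Tensor, steps: int) -> torch.Tensor:
    """Unroll the learned dynamics from an initial condition, yielding the
    per-frame states that would act as latent dynamical guidance."""
    states = [init_state]
    for _ in range(steps):
        states.append(model(states[-1]))
    return torch.stack(states, dim=0)


if __name__ == "__main__":
    nnd = NeuralNewtonianDynamics()
    # Initial position (0, 1), velocity (2, 0): the kind of initial-condition
    # parameter control the abstract emphasizes.
    s0 = torch.tensor([[0.0, 1.0, 2.0, 0.0]])
    traj = rollout(nnd, s0, steps=48)  # 2 seconds at 24 fps
    print(traj.shape)  # torch.Size([49, 1, 4])
```

The semi-implicit Euler update keeps each discrete step consistent with Newton's second law, so whatever acceleration field the network learns still produces smooth, physically plausible trajectories; the initial position and velocity are exactly the sort of parameters the paper says NewtonGen exposes for control.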

Country of Origin
🇺🇸 🇰🇷 United States, South Korea

Repos / Data Links

Page Count
31 pages

Category
Computer Science:
Computer Vision and Pattern Recognition