Goal Force: Teaching Video Models To Accomplish Physics-Conditioned Goals

Published: January 9, 2026 | arXiv ID: 2601.05848v1

By: Nate Gillman, Yinghua Zhou, Zitian Tang, and more

Potential Business Impact:

Teaches robots to push objects with precision.

Business Areas:
Motion Capture, Media and Entertainment, Video

Recent advancements in video generation have enabled the development of "world models" capable of simulating potential futures for robotics and planning. However, specifying precise goals for these models remains a challenge: text instructions are often too abstract to capture physical nuances, while target images are frequently infeasible to specify for dynamic tasks. To address this, we introduce Goal Force, a novel framework that allows users to define goals via explicit force vectors and intermediate dynamics, mirroring how humans conceptualize physical tasks. We train a video generation model on a curated dataset of synthetic causal primitives, such as elastic collisions and falling dominos, teaching it to propagate forces through time and space. Despite being trained on simple physics data, our model exhibits remarkable zero-shot generalization to complex, real-world scenarios, including tool manipulation and multi-object causal chains. Our results suggest that by grounding video generation in fundamental physical interactions, models can emerge as implicit neural physics simulators, enabling precise, physics-aware planning without reliance on external engines. We release all datasets, code, model weights, and interactive video demos at our project page.
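To make the conditioning idea concrete, here is a minimal sketch of one plausible way a force-vector goal could be rasterized into a per-frame spatial map that a video model might consume alongside its visual input. The abstract does not specify the paper's actual conditioning mechanism, so the function name, channel layout, and encoding below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rasterize_force_goal(height, width, num_frames, point, direction,
                         magnitude, apply_frame=0, radius=5):
    """Hypothetical encoding of a force-vector goal as a conditioning tensor.

    Produces a 3-channel map per frame (fx, fy, validity mask) that is zero
    everywhere except a small disk around the application point at the frame
    where the force is applied.
    """
    cond = np.zeros((num_frames, 3, height, width), dtype=np.float32)
    cy, cx = point
    # Normalize the direction and scale by the requested magnitude.
    fy, fx = direction / (np.linalg.norm(direction) + 1e-8) * magnitude
    yy, xx = np.mgrid[0:height, 0:width]
    disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    cond[apply_frame, 0][disk] = fx   # x-component of the force
    cond[apply_frame, 1][disk] = fy   # y-component of the force
    cond[apply_frame, 2][disk] = 1.0  # mask marking where a force is defined
    return cond

# Example: push an object at pixel (64, 40) to the right with unit magnitude.
goal = rasterize_force_goal(128, 128, num_frames=16,
                            point=(64, 40),
                            direction=np.array([0.0, 1.0]),  # (dy, dx)
                            magnitude=1.0)
print(goal.shape)  # (16, 3, 128, 128), one conditioning map per frame
```

Such a map could, under these assumptions, be concatenated channel-wise with the model's frame inputs or latents, so that the generator learns to propagate the specified force through the scene over time.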

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition