VDAWorld: World Modelling via VLM-Directed Abstraction and Simulation
By: Felix O'Mahony, Roberto Cipolla, Ayush Tewari
Generative video models, a leading approach to world modeling, face fundamental limitations. They often violate physical and logical rules, lack interactivity, and operate as opaque black boxes ill-suited for building structured, queryable worlds. To overcome these challenges, we propose a new paradigm focused on distilling an image-caption pair into a tractable, abstract representation optimized for simulation. We introduce VDAWorld, a framework in which a Vision-Language Model (VLM) acts as an intelligent agent to orchestrate this process. The VLM autonomously constructs a grounded (2D or 3D) scene representation by selecting from a suite of vision tools, and accordingly chooses a compatible physics simulator (e.g., rigid body, fluid) to act upon it. VDAWorld can then infer latent dynamics from the static scene to predict plausible future states. Our experiments show that this combination of intelligent abstraction and adaptive simulation results in a versatile world model capable of producing high-quality simulations across a wide range of dynamic scenarios.
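The abstract describes a pipeline in which a VLM selects vision tools to build an abstract scene representation and then picks a compatible simulator to roll it forward. The sketch below is a minimal illustration of that agentic loop; every name in it (`vda_world_rollout`, `VisionTool`, `Simulator`, `vlm_select`, and so on) is a hypothetical placeholder and not taken from the paper's actual implementation.

```python
# Hypothetical sketch of the VDAWorld-style pipeline described in the abstract.
# All class and function names are illustrative placeholders, not the authors' API.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable, Protocol


@dataclass
class Scene:
    """Abstract scene representation (2D or 3D) produced by vision tools."""
    dimensionality: str                      # "2D" or "3D"
    entities: list = field(default_factory=list)


class VisionTool(Protocol):
    """A vision tool (e.g., segmentation, monocular depth) that grounds a scene."""
    def __call__(self, image, caption: str) -> Scene: ...


class Simulator(Protocol):
    """A physics simulator (e.g., rigid body, fluid) acting on the abstract scene."""
    def step(self, scene: Scene, dt: float) -> Scene: ...


def vda_world_rollout(
    image,
    caption: str,
    vlm_select: Callable[[str, dict], str],  # VLM acting as the tool-selecting agent
    tools: dict[str, VisionTool],
    simulators: dict[str, Simulator],
    horizon: int = 30,
    dt: float = 1.0 / 30.0,
) -> list[Scene]:
    """Abstraction then simulation: the VLM picks a vision tool to build the
    scene representation, then a compatible simulator rolls it forward."""
    # Abstraction: the VLM chooses which vision tool to apply to the image/caption pair.
    tool_name = vlm_select(
        "choose a vision tool to ground this scene", {"options": list(tools)}
    )
    scene = tools[tool_name](image, caption)

    # Simulation: the VLM chooses a simulator compatible with the resulting scene.
    sim_name = vlm_select(
        "choose a physics simulator for this scene", {"options": list(simulators)}
    )
    simulator = simulators[sim_name]

    # Roll the abstract scene forward to predict plausible future states.
    trajectory = [scene]
    for _ in range(horizon):
        scene = simulator.step(scene, dt)
        trajectory.append(scene)
    return trajectory
```

In this reading, a driver script would register concrete vision tools and simulators under `tools` and `simulators`, and wrap whatever VLM endpoint is in use behind `vlm_select`; the paper's own tool set, prompting scheme, and dynamics-inference step are not specified here.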