Vid2World: Crafting Video Diffusion Models to Interactive World Models
By: Siqiao Huang, Jialong Wu, Qixing Zhou, and more
Potential Business Impact:
Lets robots learn to do tasks by watching videos.
World models, which predict transitions based on past observation and action sequences, have shown great promise in improving data efficiency for sequential decision making. However, existing world models often require extensive domain-specific training and still produce low-fidelity, coarse predictions, limiting their applicability in complex environments. In contrast, video diffusion models trained on large, internet-scale datasets have demonstrated impressive capabilities in generating high-quality videos that capture diverse real-world dynamics. In this work, we present Vid2World, a general approach for leveraging and transferring pre-trained video diffusion models into interactive world models. To bridge the gap, Vid2World performs causalization of a pre-trained video diffusion model by crafting its architecture and training objective to enable autoregressive generation. Furthermore, it introduces a causal action guidance mechanism to enhance action controllability in the resulting interactive world model. Extensive experiments in robot manipulation and game simulation domains show that our method offers a scalable and effective approach for repurposing highly capable video diffusion models into interactive world models.
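The abstract mentions two ingredients, causalization for autoregressive generation and causal action guidance, without detailing them. The sketch below is a minimal, hypothetical illustration of how such ideas are commonly realized: a causal temporal attention mask so each frame attends only to past and current frames, and a classifier-free-style interpolation between action-conditioned and action-free noise predictions. The function names, the `model` interface, and the guidance weight `w` are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def causal_temporal_attention(q, k, v):
    """Temporal self-attention restricted to past and current frames.

    q, k, v: tensors of shape (batch, frames, dim). Masking out the
    upper triangle keeps each frame from attending to future frames,
    which is one plausible way to causalize a bidirectional video
    diffusion backbone (hypothetical sketch, not the paper's exact design).
    """
    T = q.shape[1]
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5        # (B, T, T)
    future = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(future, float("-inf"))
    return F.softmax(scores, dim=-1) @ v


def action_guided_prediction(model, x_t, t, actions, null_actions, w=1.5):
    """Classifier-free-style action guidance at sampling time.

    `model`, `null_actions`, and `w` are placeholder names; the sketch
    only shows the standard interpolation between the action-conditioned
    and action-free noise predictions to strengthen action controllability.
    """
    eps_cond = model(x_t, t, actions)
    eps_uncond = model(x_t, t, null_actions)
    return eps_uncond + w * (eps_cond - eps_uncond)
```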
Similar Papers
Learning World Models for Interactive Video Generation
CV and Pattern Recognition
Makes videos that stay real and make sense.
Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets
Robotics
Teaches robots by watching videos, not just experts.
Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model
CV and Pattern Recognition
Makes videos that change instantly with your actions.