Unified Video Action Model
By: Shuang Li, Yihuai Gao, Dorsa Sadigh, and more
Potential Business Impact:
Robots learn to move and see better together.
A unified video and action model holds significant promise for robotics, where videos provide rich scene information for action prediction, and actions provide dynamics information for video prediction. However, effectively combining video generation and action prediction remains challenging, and current video generation-based methods struggle to match the performance of direct policy learning in action accuracy and inference speed. To bridge this gap, we introduce the Unified Video Action model (UVA), which jointly optimizes video and action predictions to achieve both high accuracy and efficient action inference. The key lies in learning a joint video-action latent representation and decoupling video-action decoding. The joint latent representation bridges the visual and action domains, effectively modeling the relationship between video and action sequences. Meanwhile, the decoupled decoding, powered by two lightweight diffusion heads, enables high-speed action inference by bypassing video generation during inference. Such a unified framework further enables versatile functionality through masked input training. By selectively masking actions or videos, a single model can tackle diverse tasks beyond policy learning, such as forward and inverse dynamics modeling and video generation. Via an extensive set of experiments, we demonstrate that UVA can serve as a general-purpose solution for a wide range of robotics tasks, such as policy learning, forward/inverse dynamics, and video observation prediction, without compromising performance compared to methods tailored for specific applications. Results are best viewed at https://unified-video-action-model.github.io/.
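To make the architecture described above concrete, here is a minimal PyTorch sketch of the general idea: a shared backbone learns a joint video-action latent, two lightweight heads decode video and actions separately (so the video head can be skipped for fast action inference), and masking either modality's input tokens supports policy learning, forward/inverse dynamics, or video prediction. All module names, dimensions, and the simplified decoding heads (stand-ins for the paper's diffusion heads) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class JointVideoActionModel(nn.Module):
    """Sketch of a joint video-action latent model with decoupled decoding heads."""

    def __init__(self, video_dim=512, action_dim=7, latent_dim=256,
                 num_layers=4, num_heads=8):
        super().__init__()
        # Project per-frame video features and per-step actions into a shared token space.
        self.video_proj = nn.Linear(video_dim, latent_dim)
        self.action_proj = nn.Linear(action_dim, latent_dim)
        # Learned placeholder tokens used when video or action inputs are masked out.
        self.video_mask_token = nn.Parameter(torch.zeros(1, 1, latent_dim))
        self.action_mask_token = nn.Parameter(torch.zeros(1, 1, latent_dim))
        # Shared transformer backbone that models the joint video-action latent.
        layer = nn.TransformerEncoderLayer(latent_dim, num_heads,
                                           dim_feedforward=4 * latent_dim,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers)
        # Two lightweight decoding heads (simplified stand-ins for diffusion heads).
        self.video_head = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.GELU(),
                                        nn.Linear(latent_dim, video_dim))
        self.action_head = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.GELU(),
                                         nn.Linear(latent_dim, action_dim))

    def forward(self, video_feats, actions, mask_video=False, mask_actions=False,
                decode_video=True):
        B, T, _ = video_feats.shape
        v_tok = self.video_proj(video_feats)
        a_tok = self.action_proj(actions)
        if mask_video:       # e.g. inverse-dynamics-style training
            v_tok = self.video_mask_token.expand(B, T, -1)
        if mask_actions:     # e.g. policy learning or forward dynamics / video prediction
            a_tok = self.action_mask_token.expand(B, T, -1)
        # Concatenate video and action tokens along the sequence dimension.
        tokens = torch.cat([v_tok, a_tok], dim=1)
        latent = self.backbone(tokens)
        v_lat, a_lat = latent[:, :T], latent[:, T:]
        pred_actions = self.action_head(a_lat)
        # Skipping the video head at inference keeps action prediction fast.
        pred_video = self.video_head(v_lat) if decode_video else None
        return pred_video, pred_actions


if __name__ == "__main__":
    model = JointVideoActionModel()
    video = torch.randn(2, 16, 512)   # batch of pre-extracted frame features
    acts = torch.randn(2, 16, 7)      # batch of action sequences
    # Policy-style inference: action inputs are masked, video decoding is skipped.
    _, pred_acts = model(video, acts, mask_actions=True, decode_video=False)
    print(pred_acts.shape)  # torch.Size([2, 16, 7])
```

The key design choice mirrored here is the decoupling: both heads read from the same joint latent during training, but at deployment only the action head is evaluated, which is what allows a video-generation-capable model to retain policy-level inference speed.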
Similar Papers
Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets
Robotics
Teaches robots by watching videos, not just experts.
UniVLA: Learning to Act Anywhere with Task-centric Latent Actions
Robotics
Robots learn to do more tasks with less data.
XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations
Robotics
Teaches robots to do many tasks from watching.