CoVAR: Co-generation of Video and Action for Robotic Manipulation via Multi-Modal Diffusion
By: Liudi Yang, Yang Bai, George Eskandar, and more
We present a method to generate video-action pairs that follow text instructions, starting from an initial image observation and the robot's joint states. Our approach automatically provides action labels for video diffusion models, overcoming the common lack of action annotations and enabling their full use for robotic policy learning. Existing methods either adopt two-stage pipelines, which limit tightly coupled cross-modal information sharing, or adapt a single-modal diffusion model to model the joint distribution, which cannot fully leverage pretrained video knowledge. To overcome these limitations, we (1) extend a pretrained video diffusion model with a parallel, dedicated action diffusion model that preserves pretrained knowledge, (2) introduce a Bridge Attention mechanism to enable effective cross-modal interaction, and (3) design an action refinement module to convert coarse actions into precise controls for low-resolution datasets. Extensive evaluations on multiple public benchmarks and real-world datasets demonstrate that our method generates higher-quality videos and more accurate actions, significantly outperforming existing baselines and offering a scalable framework for leveraging large-scale video data for robotic learning.
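To make the architectural idea concrete, the sketch below gives one plausible PyTorch reading of a parallel action-diffusion branch exchanging information with features from a (frozen) video diffusion backbone via cross-attention, in the spirit of the Bridge Attention described above. All module names, dimensions, and the exact attention wiring here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: a parallel action-denoising branch coupled to video features
# through bidirectional cross-attention ("bridge"). Hypothetical names and shapes.
import torch
import torch.nn as nn


class BridgeAttention(nn.Module):
    """Cross-attention letting action tokens attend to video tokens, and vice versa."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.video_to_action = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.action_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, video_tokens: torch.Tensor, action_tokens: torch.Tensor):
        a = self.norm_a(action_tokens)
        v = self.norm_v(video_tokens)
        # Action tokens query video features; video tokens query action features.
        action_out, _ = self.video_to_action(query=a, key=v, value=v)
        video_out, _ = self.action_to_video(query=v, key=a, value=a)
        # Residual connections keep the pretrained video pathway largely intact.
        return video_tokens + video_out, action_tokens + action_out


class ParallelActionBranch(nn.Module):
    """Toy action-denoising branch that runs alongside the video diffusion backbone."""

    def __init__(self, action_dim: int = 7, dim: int = 256):
        super().__init__()
        self.action_in = nn.Linear(action_dim, dim)
        self.bridge = BridgeAttention(dim)
        self.action_out = nn.Linear(dim, action_dim)

    def forward(self, noisy_actions: torch.Tensor, video_tokens: torch.Tensor):
        a = self.action_in(noisy_actions)      # (B, horizon, dim)
        # In the full model the updated video tokens would feed back into the
        # video backbone's next block; here we only keep the action stream.
        _, a = self.bridge(video_tokens, a)
        return self.action_out(a)              # predicted denoised actions (or noise)


if __name__ == "__main__":
    B, horizon, num_patches, dim = 2, 16, 64, 256
    video_tokens = torch.randn(B, num_patches, dim)  # features from the video backbone
    noisy_actions = torch.randn(B, horizon, 7)       # 7-DoF action chunk with diffusion noise
    branch = ParallelActionBranch()
    print(branch(noisy_actions, video_tokens).shape)  # torch.Size([2, 16, 7])
```

Keeping the action branch as a separate set of parameters and only coupling the two streams through residual cross-attention is one way to preserve the pretrained video knowledge while still allowing tight cross-modal information sharing.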