mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs
By: Jonas Pai, Liam Achenbach, Victoriano Montesinos, and more
Potential Business Impact:
Teaches robots to move by watching videos.
Prevailing Vision-Language-Action Models (VLAs) for robotic manipulation are built upon vision-language backbones pretrained on large-scale but disconnected, static web data. As a result, despite improved semantic generalization, the policy must implicitly infer complex physical dynamics and temporal dependencies solely from robot trajectories. This reliance creates an unsustainable data burden, necessitating continuous, large-scale expert data collection to compensate for the lack of innate physical understanding. We contend that while vision-language pretraining effectively captures semantic priors, it remains blind to physical causality. A more effective paradigm leverages video to jointly capture semantics and visual dynamics during pretraining, thereby isolating the remaining task of low-level control. To this end, we introduce mimic-video, a novel Video-Action Model (VAM) that pairs a pretrained Internet-scale video model with a flow matching-based action decoder conditioned on its latent representations. The decoder serves as an Inverse Dynamics Model (IDM), generating low-level robot actions from the latent representation of video-space action plans. Our extensive evaluation shows that this approach achieves state-of-the-art performance on simulated and real-world robotic manipulation tasks, improving sample efficiency by 10x and convergence speed by 2x compared to traditional VLA architectures.
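To make the architecture described in the abstract concrete, below is a minimal sketch of how a video-latent-conditioned, flow matching-based action decoder might look. This is not the paper's implementation: the class names, network sizes, action dimensionality, and the stand-in tensors for the frozen video backbone's output are all illustrative assumptions.

```python
# Minimal sketch of a Video-Action Model (VAM) style decoder.
# Assumption: a frozen, pretrained video model produces latent plan
# representations; only this small decoder is trained on robot data.
import torch
import torch.nn as nn


class FlowMatchingActionDecoder(nn.Module):
    """Inverse Dynamics Model: predicts a velocity field that transports
    noise toward low-level robot actions, conditioned on video latents."""

    def __init__(self, latent_dim=512, action_dim=7, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim + 1, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, video_latent, noisy_action, t):
        # t: flow-matching time in [0, 1], shape (batch, 1)
        x = torch.cat([video_latent, noisy_action, t], dim=-1)
        return self.net(x)  # predicted velocity toward the target action


def flow_matching_loss(decoder, video_latent, action):
    """Conditional flow matching: regress the constant velocity
    (action - noise) along the straight path from noise to action."""
    noise = torch.randn_like(action)
    t = torch.rand(action.shape[0], 1)
    x_t = (1 - t) * noise + t * action   # point on the interpolation path
    target_velocity = action - noise     # constant along a straight path
    pred = decoder(video_latent, x_t, t)
    return ((pred - target_velocity) ** 2).mean()


if __name__ == "__main__":
    batch, latent_dim, action_dim = 8, 512, 7
    video_latent = torch.randn(batch, latent_dim)   # stand-in for backbone output
    expert_action = torch.randn(batch, action_dim)  # stand-in for robot actions
    decoder = FlowMatchingActionDecoder(latent_dim, action_dim)
    loss = flow_matching_loss(decoder, video_latent, expert_action)
    loss.backward()
    print(f"flow matching loss: {loss.item():.4f}")
```

At inference time, such a decoder would integrate the predicted velocity field from random noise to an action, conditioned on the video model's latent plan; the sketch only shows the training objective.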
Similar Papers
MiVLA: Towards Generalizable Vision-Language-Action Model with Human-Robot Mutual Imitation Pre-training
Robotics
Robots learn to do tasks by watching humans.
VideoVLA: Video Generators Can Be Generalizable Robot Manipulators
Robotics
Robots learn new tasks by imagining future outcomes.
See Once, Then Act: Vision-Language-Action Model with Task Learning from One-Shot Video Demonstrations
Robotics
Robots learn new tasks from just one video.