Vidarc: Embodied Video Diffusion Model for Closed-loop Control
By: Yao Feng, Chendong Xiang, Xinyi Mao, and more
Robotic arm manipulation in data-scarce settings is a highly challenging task due to complex embodiment dynamics and diverse contexts. Recent video-based approaches have shown great promise in capturing and transferring temporal and physical interactions by pre-training on Internet-scale video data. However, such methods are often not optimized for embodiment-specific closed-loop control, typically suffering from high latency and insufficient grounding. In this paper, we present Vidarc (Video Diffusion for Action Reasoning and Closed-loop Control), a novel autoregressive embodied video diffusion approach augmented by a masked inverse dynamics model. By grounding video predictions with action-relevant masks and incorporating real-time feedback through cached autoregressive generation, Vidarc achieves fast, accurate closed-loop control. Pre-trained on one million cross-embodiment episodes, Vidarc surpasses state-of-the-art baselines, achieving at least a 15% higher success rate in real-world deployment and a 91% reduction in latency. We also highlight its robust generalization and error correction capabilities across previously unseen robotic platforms.
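To make the closed-loop structure described above concrete, the sketch below shows one plausible reading of the control cycle: an autoregressive video predictor with cached context proposes the next frame, a masked inverse dynamics model decodes an action from the (observation, prediction) pair, and the resulting observation is fed back in. This is a minimal illustration, not the Vidarc implementation; the class names, mask heuristic, and environment interface are all hypothetical placeholders.

```python
import numpy as np


class VideoDiffusionPredictor:
    """Hypothetical stand-in for the autoregressive video diffusion model.

    Cached context frames play the role of the paper's cached autoregressive
    generation: each step only has to produce the next frame.
    """

    def __init__(self, frame_shape=(64, 64, 3)):
        self.frame_shape = frame_shape
        self.cache = []  # previously seen frames kept as conditioning context

    def predict_next_frame(self, observed_frame):
        # A real model would denoise the next frame conditioned on the cache;
        # here we simply perturb the current observation as a placeholder.
        self.cache.append(observed_frame)
        noise = 0.01 * np.random.randn(*self.frame_shape)
        return np.clip(observed_frame + noise, 0.0, 1.0)


class MaskedInverseDynamics:
    """Hypothetical masked inverse dynamics model.

    Maps (current frame, predicted frame) to an action while attending only to
    action-relevant pixels selected by a binary mask.
    """

    def __init__(self, action_dim=7):
        self.action_dim = action_dim

    def action_mask(self, frame):
        # Placeholder mask: treat bright regions as action-relevant
        # (e.g., the arm / gripper area in a real setup).
        return (frame.mean(axis=-1) > 0.5).astype(np.float32)

    def infer_action(self, current_frame, predicted_frame):
        mask = self.action_mask(current_frame)
        masked_motion = (predicted_frame - current_frame).mean(axis=-1) * mask
        # Placeholder regression: pool masked pixel motion into an action vector.
        return np.full(self.action_dim, masked_motion.mean())


def closed_loop_episode(env_step, initial_obs, horizon=10):
    """One closed-loop rollout: predict, decode an action, execute, observe."""
    predictor = VideoDiffusionPredictor(frame_shape=initial_obs.shape)
    idm = MaskedInverseDynamics()
    obs = initial_obs
    for _ in range(horizon):
        predicted = predictor.predict_next_frame(obs)  # video prediction step
        action = idm.infer_action(obs, predicted)      # masked inverse dynamics
        obs = env_step(action)                         # real-time feedback
    return obs


if __name__ == "__main__":
    # Toy environment: the "camera" image drifts slightly with each action.
    state = np.random.rand(64, 64, 3)

    def env_step(action):
        global state
        state = np.clip(state + 0.001 * action.mean(), 0.0, 1.0)
        return state

    closed_loop_episode(env_step, state)
```

The point of the sketch is the data flow, not the models: latency is dominated by the prediction step, so caching the autoregressive context and keeping the inverse dynamics model lightweight is what makes per-step closed-loop control feasible.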