DRAW2ACT: Turning Depth-Encoded Trajectories into Robotic Demonstration Videos
By: Yang Bai, Liudi Yang, George Eskandar, and more
Potential Business Impact:
Helps robots learn to do tasks by watching videos.
Video diffusion models provide powerful real-world simulators for embodied AI but remain limited in controllability for robotic manipulation. Recent works on trajectory-conditioned video generation address this gap but often rely on 2D trajectories or single-modality conditioning, which restricts their ability to produce controllable and consistent robotic demonstrations. We present DRAW2ACT, a depth-aware, trajectory-conditioned video generation framework that extracts multiple orthogonal representations from the input trajectory, capturing depth, semantics, shape, and motion, and injects them into the diffusion model. Moreover, we propose to jointly generate spatially aligned RGB and depth videos, leveraging cross-modality attention mechanisms and depth supervision to enhance spatio-temporal consistency. Finally, we introduce a multimodal policy model conditioned on the generated RGB and depth sequences to regress the robot's joint angles. Experiments on Bridge V2, Berkeley Autolab, and simulation benchmarks show that DRAW2ACT achieves superior visual fidelity and consistency while yielding higher manipulation success rates than existing baselines.
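To make the pipeline more concrete, the sketch below is a minimal, hypothetical illustration (not the authors' released code) of two ideas from the abstract: cross-modality attention between RGB and depth token streams, and a policy head that regresses joint angles from the fused features. The module names, tensor shapes, feature dimension, and joint count are illustrative assumptions.

```python
# Hypothetical sketch, assuming token features have already been extracted
# from the generated RGB and depth videos; shapes and names are illustrative.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """RGB tokens attend to depth tokens and vice versa, with residual fusion."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.rgb_to_depth = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.depth_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_depth = nn.LayerNorm(dim)

    def forward(self, rgb_tokens: torch.Tensor, depth_tokens: torch.Tensor):
        # rgb_tokens, depth_tokens: (batch, num_tokens, dim)
        rgb_attn, _ = self.rgb_to_depth(rgb_tokens, depth_tokens, depth_tokens)
        depth_attn, _ = self.depth_to_rgb(depth_tokens, rgb_tokens, rgb_tokens)
        rgb_tokens = self.norm_rgb(rgb_tokens + rgb_attn)
        depth_tokens = self.norm_depth(depth_tokens + depth_attn)
        return rgb_tokens, depth_tokens


class JointAnglePolicy(nn.Module):
    """Regresses robot joint angles from fused RGB/depth video features."""

    def __init__(self, dim: int = 256, num_joints: int = 7):
        super().__init__()
        self.fuse = CrossModalAttention(dim)
        self.head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, num_joints)
        )

    def forward(self, rgb_tokens: torch.Tensor, depth_tokens: torch.Tensor):
        rgb_tokens, depth_tokens = self.fuse(rgb_tokens, depth_tokens)
        # Mean-pool each modality, concatenate, then regress joint angles.
        pooled = torch.cat([rgb_tokens.mean(dim=1), depth_tokens.mean(dim=1)], dim=-1)
        return self.head(pooled)  # (batch, num_joints)


if __name__ == "__main__":
    # Toy usage: 2 clips, 64 tokens per modality, 256-dim features, 7 joints.
    policy = JointAnglePolicy()
    rgb = torch.randn(2, 64, 256)
    depth = torch.randn(2, 64, 256)
    print(policy(rgb, depth).shape)  # torch.Size([2, 7])
```

The bidirectional attention here is just one plausible way to couple the two modalities; the paper's actual conditioning, supervision, and action parameterization may differ.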
Similar Papers
Video2Act: A Dual-System Video Diffusion Policy with Robotic Spatio-Motional Modeling
Robotics
Helps robots learn to move and grab things.
RealisMotion: Decomposed Human Motion Control and Video Generation in the World Space
CV and Pattern Recognition
Lets you make videos of anyone doing anything.
Generative Video Motion Editing with 3D Point Tracks
CV and Pattern Recognition
Edits videos by changing how things move.