ManipDreamer3D: Synthesizing Plausible Robotic Manipulation Video with Occupancy-aware 3D Trajectory
By: Ying Li, Xiaobao Wei, Xiaowei Chi, and more
Potential Business Impact:
Robots learn to move objects from pictures and words.
Data scarcity remains a major challenge in robotic manipulation. Although diffusion models offer a promising way to generate robotic manipulation videos, existing methods largely depend on 2D trajectories, which suffer from inherent 3D spatial ambiguity. In this work, we present ManipDreamer3D, a novel framework for generating plausible 3D-aware robotic manipulation videos from an input image and a text instruction. Our method combines 3D trajectory planning over a 3D occupancy map reconstructed from a third-person view with a novel trajectory-to-video diffusion model. Specifically, ManipDreamer3D first reconstructs the 3D occupancy representation from the input image and then computes an optimized 3D end-effector trajectory that minimizes path length while avoiding collisions. Next, a latent editing technique turns the initial image latent and the optimized 3D trajectory into conditioning sequences for our specially trained trajectory-to-video diffusion model, which produces robotic pick-and-place videos. Our method generates robotic videos with autonomously planned, plausible 3D trajectories, significantly reducing the need for human intervention. Experimental results demonstrate superior visual quality compared to existing methods.
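The abstract does not specify how the collision-free, length-minimizing 3D trajectory is computed. As a rough illustration only, the sketch below shows one plausible reading: A*-style search over a voxelized occupancy grid that avoids occupied cells while keeping the end-effector path short. The function name plan_trajectory, the 6-connected neighborhood, and the unit-cost voxel moves are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's code): occupancy-aware 3D path planning
# via A* over a boolean voxel grid. Occupied voxels are treated as obstacles.
import heapq
import itertools
import numpy as np

def plan_trajectory(occupancy, start, goal):
    """Return a list of voxel indices from start to goal avoiding occupied cells.

    occupancy: bool array of shape (X, Y, Z); True means the voxel is occupied.
    start, goal: integer (x, y, z) voxel indices, assumed to lie in free space.
    """
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def heuristic(p):
        # Euclidean distance to the goal; admissible for unit-cost moves.
        return float(np.linalg.norm(np.asarray(p) - np.asarray(goal)))

    tie = itertools.count()  # tiebreaker so heap entries always compare
    open_set = [(heuristic(start), 0.0, next(tie), start, None)]
    came_from, g_score = {}, {start: 0.0}

    while open_set:
        _, g, _, current, parent = heapq.heappop(open_set)
        if current in came_from:
            continue  # already expanded with an equal or better cost
        came_from[current] = parent
        if current == goal:
            # Walk parent pointers back to the start to recover the path.
            path = [current]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx, dy, dz in neighbors:
            nxt = (current[0] + dx, current[1] + dy, current[2] + dz)
            if any(c < 0 or c >= s for c, s in zip(nxt, occupancy.shape)):
                continue  # outside the reconstructed volume
            if occupancy[nxt]:
                continue  # occupied voxel: skip to avoid collisions
            new_g = g + 1.0
            if new_g < g_score.get(nxt, float("inf")):
                g_score[nxt] = new_g
                heapq.heappush(open_set,
                               (new_g + heuristic(nxt), new_g, next(tie), nxt, current))
    return None  # no collision-free path exists in this grid

if __name__ == "__main__":
    grid = np.zeros((20, 20, 20), dtype=bool)
    grid[5:15, 10, 0:12] = True  # a wall-like obstacle the path must detour around
    path = plan_trajectory(grid, start=(2, 2, 2), goal=(18, 18, 2))
    print("waypoints:", len(path) if path else "no path")

In the actual system, the resulting waypoints would presumably be lifted to continuous end-effector poses and used to condition the trajectory-to-video diffusion model; that conditioning step is not reproduced here.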
Similar Papers
Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control
CV and Pattern Recognition
Robots learn to move objects together better.
AnchorDream: Repurposing Video Diffusion for Embodiment-Aware Robot Data Synthesis
Robotics
Makes robots learn new skills from few examples.
PoseTraj: Pose-Aware Trajectory Control in Video Diffusion
CV and Pattern Recognition
Makes videos move objects realistically in 3D.