Object-centric 3D Motion Field for Robot Learning from Human Videos
By: Zhao-Heng Yin, Sherry Yang, Pieter Abbeel
Potential Business Impact:
Robots learn to do tasks by watching videos.
Learning robot control policies from human videos is a promising direction for scaling up robot learning. However, how to extract action knowledge (or action representations) from videos for policy learning remains a key challenge. Existing action representations such as video frames, pixel flow, and point cloud flow have inherent limitations such as modeling complexity or loss of information. In this paper, we propose to use the object-centric 3D motion field to represent actions for robot learning from human videos, and present a novel framework for extracting this representation from videos for zero-shot control. The framework introduces two novel components. First, a training pipeline for a "denoising" 3D motion field estimator that robustly extracts fine object 3D motions from human videos with noisy depth. Second, a dense object-centric 3D motion field prediction architecture that favors both cross-embodiment transfer and policy generalization across backgrounds. We evaluate the system in real-world setups. Experiments show that our method reduces 3D motion estimation error by over 50% compared to the latest method, achieves a 55% average success rate across diverse tasks where prior approaches fail (below 10%), and can even acquire fine-grained manipulation skills such as insertion.
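To make the idea of a dense object-centric 3D motion field concrete, here is a minimal sketch (not the authors' code) of a network that maps an object point cloud, cropped from a possibly noisy depth map, to a per-point 3D displacement field. The class name MotionFieldNet, the PointNet-style encoder, and all layer sizes are illustrative assumptions; the paper's actual architecture and training pipeline are not specified in the abstract.

```python
# Illustrative sketch of a dense object-centric 3D motion field predictor.
# All names and sizes are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class MotionFieldNet(nn.Module):
    """Predict a per-point 3D motion vector for an object point cloud.

    Input:  object points (B, N, 3) sampled from a (possibly noisy) depth map.
    Output: 3D motion field (B, N, 3), i.e. each point's displacement to the
            next timestep. Supervising against cleaner pseudo-ground-truth
            motion is one way such a model could act as a motion "denoiser".
    """

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.point_encoder = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.motion_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        feats = self.point_encoder(points)                    # (B, N, H) per-point features
        global_feat = feats.max(dim=1, keepdim=True).values   # (B, 1, H) object-level context
        global_feat = global_feat.expand(-1, points.shape[1], -1)
        fused = torch.cat([feats, global_feat], dim=-1)       # per-point + object context
        return self.motion_head(fused)                        # (B, N, 3) motion vectors


if __name__ == "__main__":
    net = MotionFieldNet()
    pts = torch.randn(2, 512, 3)      # two objects, 512 sampled points each
    motion = net(pts)                 # predicted dense 3D motion field
    print(motion.shape)               # torch.Size([2, 512, 3])
```

Because the field is defined on object points rather than on robot joints or image pixels, the same predicted motion can in principle be retargeted to different embodiments, which is the cross-embodiment property the abstract highlights.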
Similar Papers
VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
Robotics
Teaches robots to do tasks from watching videos.
3D Flow Diffusion Policy: Visuomotor Policy Learning via Generating Flow in 3D Space
Robotics
Robots learn to grab and move things better.
MotionFlow: Learning Implicit Motion Flow for Complex Camera Trajectory Control in Video Generation
CV and Pattern Recognition
Makes videos follow camera moves perfectly.