Generalist Robot Manipulation beyond Action Labeled Data
By: Alexander Spiridonov, Jan-Nico Zaech, Nikolay Nikolov, and more
Potential Business Impact:
Robots learn new tasks from watching videos.
Recent advances in generalist robot manipulation leverage pre-trained Vision-Language Models (VLMs) and large-scale robot demonstrations to tackle diverse tasks in a zero-shot manner. A key challenge remains: scaling high-quality, action-labeled robot demonstration data, on which existing methods rely for robustness and generalization. To address this, we propose a method that learns from videos without action labels (featuring humans and/or robots in action), enhancing open-vocabulary performance and enabling data-efficient learning of new tasks. Our method extracts dense, dynamic 3D point clouds at the hand or gripper location and trains a proposed 3D dynamics predictor with self-supervision. This predictor is then fine-tuned into an action predictor on a smaller labeled dataset for action alignment. We show that our method not only learns from unlabeled human and robot demonstrations, improving downstream generalist robot policies, but also enables robots to learn new tasks without action labels (i.e., out-of-action generalization) in both real-world and simulated settings.
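The abstract describes a two-stage pipeline: self-supervised 3D dynamics prediction on unlabeled videos, followed by action alignment on a smaller labeled dataset. The sketch below illustrates that training structure only; the module names, tensor shapes, network sizes, and loss choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the two-stage training idea described above.
# All names, shapes, and losses here are hypothetical placeholders.

import torch
import torch.nn as nn


class DynamicsPredictor(nn.Module):
    """Predicts future motion of the 3D point cloud around the hand/gripper
    from the current observation (self-supervised; no action labels needed)."""

    def __init__(self, num_points=256, hidden=512):
        super().__init__()
        in_dim = num_points * 3
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Predicts per-point displacement to the next time step.
        self.dynamics_head = nn.Linear(hidden, in_dim)

    def forward(self, points):  # points: (B, num_points, 3)
        feats = self.backbone(points.flatten(1))
        delta = self.dynamics_head(feats).view_as(points)
        return delta, feats


class ActionHead(nn.Module):
    """Maps pretrained dynamics features to robot actions;
    trained on the smaller action-labeled dataset."""

    def __init__(self, hidden=512, action_dim=7):
        super().__init__()
        self.head = nn.Linear(hidden, action_dim)

    def forward(self, feats):
        return self.head(feats)


def pretrain_dynamics(model, unlabeled_loader, epochs=1, lr=1e-4):
    """Stage 1: self-supervision on unlabeled human/robot videos.
    Each batch yields current and next-step gripper/hand point clouds."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for pts_t, pts_t1 in unlabeled_loader:
            pred_delta, _ = model(pts_t)
            loss = nn.functional.mse_loss(pred_delta, pts_t1 - pts_t)
            opt.zero_grad(); loss.backward(); opt.step()


def align_actions(model, action_head, labeled_loader, epochs=1, lr=1e-4):
    """Stage 2: tune the pretrained predictor into an action predictor
    using a smaller action-labeled dataset."""
    params = list(model.parameters()) + list(action_head.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for pts_t, actions in labeled_loader:
            _, feats = model(pts_t)
            loss = nn.functional.mse_loss(action_head(feats), actions)
            opt.zero_grad(); loss.backward(); opt.step()
```

The key design point the sketch tries to convey is that the expensive representation learning happens on action-free video, so the action-labeled data only has to teach the comparatively small mapping from learned dynamics features to robot actions.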
Similar Papers
Improving Generalization of Language-Conditioned Robot Manipulation
Robotics
Robots learn to move objects with few examples.
Latent Action Pretraining Through World Modeling
Robotics
Teaches robots to do tasks from watching videos.
Enhancing Generalization in Vision-Language-Action Models by Preserving Pretrained Representations
Robotics
Robots learn to do new jobs by watching and reading.