Adapt3R: Adaptive 3D Scene Representation for Domain Transfer in Imitation Learning
By: Albert Wilcox, Mohamed Ghanem, Masoud Moghani, and more
Potential Business Impact:
Lets robots keep working when the camera or robot arm changes, without retraining.
Imitation Learning can train robots to perform complex and diverse manipulation tasks, but learned policies are brittle when observations fall outside the training distribution. 3D scene representations that incorporate observations from calibrated RGBD cameras have been proposed as a way to mitigate this, but in our evaluations with unseen embodiments and camera viewpoints they show only modest improvement. To address these challenges, we propose Adapt3R, a general-purpose 3D observation encoder that synthesizes data from calibrated RGBD cameras into a vector usable as conditioning for arbitrary IL algorithms. The key idea is to use a pretrained 2D backbone to extract semantic information, using 3D only as a medium to localize this information with respect to the end-effector. We show across 93 simulated and 6 real tasks that when trained end-to-end with a variety of IL algorithms, Adapt3R maintains these algorithms' learning capacity while enabling zero-shot transfer to novel embodiments and camera poses.
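To make the key idea concrete, here is a minimal PyTorch sketch of an Adapt3R-style encoder, not the authors' implementation: names like `Adapt3REncoder` and `backproject` are hypothetical, and the backbone choice, pooling, and frame conventions are assumptions based only on the abstract. The essential pattern is (1) extract semantic features with a pretrained 2D backbone, (2) lift pixels to 3D with depth and calibration, (3) re-express the points relative to the end-effector, and (4) pool everything into one conditioning vector.

```python
# Minimal sketch of an Adapt3R-style encoder (assumption, not the paper's code).
# Assumes a single RGBD camera with known intrinsics K (3x3), camera-to-world
# extrinsics T_wc (4x4), and end-effector pose T_we (4x4).
import torch
import torch.nn as nn
import torchvision


def backproject(depth, K, T_wc):
    """Lift a depth map (B, H, W) to world-frame points (B, H*W, 3)."""
    B, H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, device=depth.device),
                          torch.arange(W, device=depth.device), indexing="ij")
    uv1 = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()   # (H, W, 3)
    rays = uv1.reshape(-1, 3) @ torch.linalg.inv(K).T               # (H*W, 3)
    pts_c = rays.unsqueeze(0) * depth.reshape(B, -1, 1)             # camera frame
    pts_h = torch.cat([pts_c, torch.ones_like(pts_c[..., :1])], dim=-1)
    return (pts_h @ T_wc.transpose(-1, -2))[..., :3]                # world frame


class Adapt3REncoder(nn.Module):
    """Fuse 2D semantic features with EE-relative 3D positions, pool to a vector."""

    def __init__(self, out_dim=256):
        super().__init__()
        resnet = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # (B,512,h,w)
        self.mlp = nn.Sequential(nn.Linear(512 + 3, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, rgb, depth, K, T_wc, T_we):
        feat = self.backbone(rgb)                        # (B, 512, h, w)
        B, C, h, w = feat.shape
        # Downsample depth and rescale intrinsics to the feature-map resolution.
        d = nn.functional.interpolate(depth.unsqueeze(1), (h, w)).squeeze(1)
        K_s = K.clone()
        K_s[0] *= w / rgb.shape[-1]                      # scale fx, cx
        K_s[1] *= h / rgb.shape[-2]                      # scale fy, cy
        pts_w = backproject(d, K_s, T_wc)                # (B, h*w, 3)
        # Express points relative to the end-effector: 3D is used only to
        # localize the 2D semantic features, per the paper's key idea.
        pts_h = torch.cat([pts_w, torch.ones_like(pts_w[..., :1])], dim=-1)
        pts_e = (pts_h @ torch.linalg.inv(T_we).transpose(-1, -2))[..., :3]
        tokens = torch.cat([feat.flatten(2).transpose(1, 2), pts_e], dim=-1)
        return self.mlp(tokens).max(dim=1).values        # (B, out_dim) vector
```

Because the output is a single fixed-size vector, it can condition arbitrary IL policy heads (e.g., a diffusion policy or behavior cloning MLP) without changing their training loop, which is what the abstract means by a general-purpose encoder.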
Similar Papers
Rig3R: Rig-Aware Conditioning for Learned 3D Reconstruction
CV and Pattern Recognition
Helps robots understand 3D space better.
4D3R: Motion-Aware Neural Reconstruction and Rendering of Dynamic Scenes from Monocular Videos
CV and Pattern Recognition
Creates realistic 3D videos from regular videos.
ManiVID-3D: Generalizable View-Invariant Reinforcement Learning for Robotic Manipulation via Disentangled 3D Representations
Robotics
Robots can keep doing tasks even if the camera moves.