Disentangled Object-Centric Image Representation for Robotic Manipulation
By: David Emukpere, Romain Deffayet, Bingbing Wu, and more
Potential Business Impact:
Robots learn to pick up and place objects more reliably, even in cluttered scenes with many objects.
Learning robotic manipulation skills from vision is a promising approach for developing robotics applications that can generalize broadly to real-world scenarios. Accordingly, many approaches toward this goal have been explored with fruitful results. In particular, object-centric representation methods have been shown to provide better inductive biases for skill learning, leading to improved performance and generalization. Nonetheless, we show that object-centric methods can struggle to learn simple manipulation skills in multi-object environments. We therefore propose DOCIR, an object-centric framework that introduces a disentangled representation for objects of interest, obstacles, and the robot embodiment. We show that this approach achieves state-of-the-art performance for learning pick-and-place skills from visual inputs in multi-object environments, and that it generalizes at test time to changes in the objects of interest and distractors in the scene. Furthermore, we demonstrate its efficacy both in simulation and in zero-shot transfer to the real world.
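The abstract describes the disentanglement only at a high level, and the paper's actual architecture is not given here. As a rough illustration of what "a disentangled representation for objects of interest, obstacles, and robot embodiment" could look like, below is a minimal PyTorch sketch that routes object slots into three separate streams before fusing them into a policy input. All class names, layer sizes, and the pooling choice are illustrative assumptions, not DOCIR's implementation.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Hypothetical sketch of a disentangled object-centric encoder.

    Object slots (e.g., from a pretrained object-centric vision model)
    are split into three factors -- target object, obstacles, and robot
    embodiment -- and each factor gets its own feature stream before
    fusion. Names and dimensions are illustrative only.
    """

    def __init__(self, slot_dim: int = 64, embed_dim: int = 128):
        super().__init__()
        # One small MLP per factor so each is encoded independently.
        self.target_net = nn.Sequential(nn.Linear(slot_dim, embed_dim), nn.ReLU())
        self.obstacle_net = nn.Sequential(nn.Linear(slot_dim, embed_dim), nn.ReLU())
        self.robot_net = nn.Sequential(nn.Linear(slot_dim, embed_dim), nn.ReLU())

    def forward(self, target_slot, obstacle_slots, robot_slot):
        # target_slot:    (B, slot_dim)    slot for the object of interest
        # obstacle_slots: (B, K, slot_dim) slots for K obstacles/distractors
        # robot_slot:     (B, slot_dim)    slot for the robot embodiment
        z_target = self.target_net(target_slot)
        # Pool over obstacle slots so the fused state is invariant to the
        # number and ordering of distractors in the scene.
        z_obstacles = self.obstacle_net(obstacle_slots).max(dim=1).values
        z_robot = self.robot_net(robot_slot)
        # Concatenated state would feed a downstream manipulation policy.
        return torch.cat([z_target, z_obstacles, z_robot], dim=-1)

# Toy usage with random slots standing in for a real perception backbone.
enc = DisentangledEncoder()
B, K = 2, 5
state = enc(torch.randn(B, 64), torch.randn(B, K, 64), torch.randn(B, 64))
print(state.shape)  # torch.Size([2, 384])
```

Pooling the obstacle stream is one plausible way to keep the representation robust to varying numbers of distractors, consistent with the test-time generalization the abstract claims, though the paper may handle this differently.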
Similar Papers
Object-Centric Representations Improve Policy Generalization in Robot Manipulation
Robotics
Robots learn to grab things better by seeing objects.
Zero-Shot Visual Generalization in Robot Manipulation
Robotics
Robots learn to do tasks in new places.
Are We Done with Object-Centric Learning?
CV and Pattern Recognition
Teaches computers to see objects separately.