Disentangled Object-Centric Image Representation for Robotic Manipulation

Published: March 14, 2025 | arXiv ID: 2503.11565v1

By: David Emukpere, Romain Deffayet, Bingbing Wu, and more

Potential Business Impact:

Robots learn to pick up and place objects more reliably, even in cluttered scenes with many objects.

Business Areas:
Image Recognition, Data and Analytics, Software

Learning robotic manipulation skills from vision is a promising approach for developing robotics applications that can generalize broadly to real-world scenarios. Accordingly, many approaches to this end have been explored, with fruitful results. In particular, object-centric representation methods have been shown to provide better inductive biases for skill learning, leading to improved performance and generalization. Nonetheless, we show that object-centric methods can struggle to learn simple manipulation skills in multi-object environments. We therefore propose DOCIR, an object-centric framework that introduces a disentangled representation for objects of interest, obstacles, and the robot embodiment. We show that this approach achieves state-of-the-art performance for learning pick-and-place skills from visual inputs in multi-object environments and generalizes at test time to changing objects of interest and distractors in the scene. Furthermore, we demonstrate its efficacy both in simulation and in zero-shot transfer to the real world.
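To make the idea of a disentangled object-centric representation concrete, below is a minimal, hypothetical sketch of how such an encoder could be organized: separate encoders for objects of interest, obstacles, and the robot embodiment, with their embeddings concatenated as the policy input. All class and parameter names here are illustrative assumptions, not details taken from the DOCIR paper.

```python
# Hypothetical sketch of a disentangled object-centric encoder.
# Assumes per-factor segmentation masks are available; architecture details
# are illustrative only and not taken from the paper.
import torch
import torch.nn as nn


class SlotEncoder(nn.Module):
    """Small CNN mapping one masked RGB view to a fixed-size embedding."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DisentangledRepresentation(nn.Module):
    """Encodes objects of interest, obstacles, and the robot embodiment with
    separate encoders, then concatenates the embeddings for a policy head."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.interest_enc = SlotEncoder(embed_dim)
        self.obstacle_enc = SlotEncoder(embed_dim)
        self.robot_enc = SlotEncoder(embed_dim)

    def forward(self, rgb, interest_mask, obstacle_mask, robot_mask):
        # Each binary mask selects the pixels belonging to one scene factor.
        z_interest = self.interest_enc(rgb * interest_mask)
        z_obstacle = self.obstacle_enc(rgb * obstacle_mask)
        z_robot = self.robot_enc(rgb * robot_mask)
        return torch.cat([z_interest, z_obstacle, z_robot], dim=-1)


if __name__ == "__main__":
    rep = DisentangledRepresentation()
    rgb = torch.rand(1, 3, 128, 128)
    masks = [torch.randint(0, 2, (1, 1, 128, 128)).float() for _ in range(3)]
    print(rep(rgb, *masks).shape)  # torch.Size([1, 192])
```

Keeping the three factors in separate embedding slots is what would let a policy swap the object of interest or add distractors at test time without retraining the other encoders, which is the kind of generalization the abstract reports.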

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition