Invariance Co-training for Robot Visual Generalization
By: Jonathan Yang, Chelsea Finn, Dorsa Sadigh
Reasoning from diverse observations is a fundamental capability for generalist robot policies to operate in a wide range of environments. Despite recent advancements, many large-scale robotic policies remain sensitive to key sources of observational variation such as changes in camera perspective, lighting, and the presence of distractor objects. We posit that the limited generalizability of these models arises from the substantial diversity required to robustly cover these quasistatic axes, coupled with the current scarcity of large-scale robotic datasets that exhibit rich variation across them. In this work, we systematically examine what robots need to generalize across these challenging axes by introducing two key auxiliary tasks, state similarity and invariance to observational perturbations, applied to both demonstration data and static visual data. We then show that, via these auxiliary tasks, leveraging both more expensive robotic demonstration data and less expensive, visually rich synthetic images generated from non-physics-based simulation (for example, Unreal Engine) can lead to substantial gains in generalization to unseen camera viewpoints, lighting configurations, and distractor conditions. Our results demonstrate that co-training on this diverse data improves performance by 18 percent over existing generative augmentation methods. For more information and videos, please visit https://invariance-cotraining.github.io.
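The abstract names two auxiliary objectives, state similarity and invariance to observational perturbations, applied during co-training on demonstration data and synthetic images, but does not spell out how they are implemented. The sketch below is one hypothetical way such losses could be wired together in PyTorch, not the authors' actual method: the `encoder`, `policy`, `perturb` function, batch keys, and loss weights are all illustrative assumptions.

```python
# Hypothetical sketch of invariance co-training auxiliary losses.
# Nothing here is taken from the paper's code; encoder, policy, perturb(),
# batch layout, and the loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def invariance_cotraining_losses(encoder, policy, demo_batch, synthetic_batch,
                                 perturb, w_sim=1.0, w_inv=1.0):
    """Combine behavior cloning with two auxiliary losses on one batch.

    demo_batch: dict with 'obs' (B, C, H, W) and 'action' (B, A) from robot demos.
    synthetic_batch: dict with paired renders 'obs_a', 'obs_b' of the same scene
        state under different camera / lighting / distractor conditions.
    perturb: function applying an observational perturbation to a batch of images.
    """
    # 1) Standard behavior cloning on demonstration data.
    z_demo = encoder(demo_batch["obs"])
    bc_loss = F.mse_loss(policy(z_demo), demo_batch["action"])

    # 2) State similarity: renders of the same underlying state should map to
    #    nearby embeddings, even when they look very different.
    z_a = encoder(synthetic_batch["obs_a"])
    z_b = encoder(synthetic_batch["obs_b"])
    sim_loss = 1.0 - F.cosine_similarity(z_a, z_b, dim=-1).mean()

    # 3) Invariance to observational perturbations: the policy output on a demo
    #    observation should not change when the observation is perturbed.
    z_pert = encoder(perturb(demo_batch["obs"]))
    inv_loss = F.mse_loss(policy(z_pert), policy(z_demo).detach())

    return bc_loss + w_sim * sim_loss + w_inv * inv_loss
```

Design choices such as the cosine-similarity form of the state-similarity term and the stop-gradient on the unperturbed policy output are assumptions of this sketch; the paper and project page should be consulted for the actual formulation.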