Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers
By: Ian Chuang, Andrew Lee, Dechen Gao, and more
Potential Business Impact:
Robots see better by looking like humans.
Human vision is a highly active process driven by gaze, which directs attention and fixation to task-relevant regions and dramatically reduces visual processing. In contrast, robot learning systems typically rely on passive, uniform processing of raw camera images. In this work, we explore how incorporating human-like active gaze into robotic policies can enhance both efficiency and performance. We build on recent advances in foveated image processing and apply them to an Active Vision robot system that emulates both human head movement and eye tracking. Extending prior work on the AV-ALOHA robot simulation platform, we introduce a framework for simultaneously collecting eye-tracking data and robot demonstrations from a human operator, as well as a simulation benchmark and dataset for training robot policies that incorporate human gaze. Given the widespread use of Vision Transformers (ViTs) in robot learning, we integrate gaze information into ViTs using a foveated patch tokenization scheme inspired by recent work in image segmentation. Compared to uniform patch tokenization, this significantly reduces the number of tokens, and thus computation, without sacrificing visual fidelity near regions of interest. We also explore two approaches to gaze imitation and prediction from human data. The first is a two-stage model that predicts gaze to guide foveation and action; the second integrates gaze into the action space, allowing the policy to jointly predict gaze and actions end-to-end. Our results show that our method for foveated robot vision not only drastically reduces computational overhead but also improves performance on high-precision tasks and robustness to unseen distractors. Together, these findings suggest that human-inspired visual processing offers a useful inductive bias for robotic vision systems. https://ian-chuang.github.io/gaze-av-aloha/
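To make the token-saving idea concrete, below is a minimal two-level sketch of gaze-conditioned foveated patch tokenization in PyTorch. It is not the paper's exact scheme: the function name `foveated_tokenize`, the fovea/periphery sizes, and the two-resolution split are illustrative assumptions. The intent is only to show how tokenizing a full-resolution crop around the gaze plus a downsampled peripheral view yields far fewer ViT tokens than uniform patching, while keeping full detail near the point of regard.

```python
# Hypothetical sketch of two-level foveated patch tokenization (not the paper's code).
import torch
import torch.nn.functional as F

def foveated_tokenize(image, gaze_xy, patch=16, fovea=112, periphery=112):
    """image: (C, H, W) tensor; gaze_xy: (x, y) gaze point in pixel coordinates.
    Returns (N, C*patch*patch) flattened patch tokens."""
    C, H, W = image.shape
    # 1. Fine tokens: crop a fovea-sized window centred on the gaze point,
    #    clamped so the crop stays inside the image.
    x = int(min(max(gaze_xy[0] - fovea // 2, 0), W - fovea))
    y = int(min(max(gaze_xy[1] - fovea // 2, 0), H - fovea))
    fine = image[:, y:y + fovea, x:x + fovea]
    # 2. Coarse tokens: downsample the whole image to a low-resolution peripheral view.
    coarse = F.interpolate(image[None], size=(periphery, periphery),
                           mode="bilinear", align_corners=False)[0]

    def to_patches(img):
        # (C, h, w) -> (num_patches, C*patch*patch), non-overlapping patches.
        p = img.unfold(1, patch, patch).unfold(2, patch, patch)
        return p.permute(1, 2, 0, 3, 4).reshape(-1, C * patch * patch)

    return torch.cat([to_patches(fine), to_patches(coarse)], dim=0)

# Example: a 224x224 image gives 14*14 = 196 uniform 16x16 tokens, but only
# (112/16)^2 + (112/16)^2 = 98 foveated tokens with full detail near the gaze.
img = torch.randn(3, 224, 224)
tokens = foveated_tokenize(img, gaze_xy=(150, 100))
print(tokens.shape)  # torch.Size([98, 768])
```

In a policy, the resulting tokens would be linearly projected and fed to the ViT in place of uniform patch embeddings; the gaze point itself can come either from a separate gaze-prediction stage or from the policy's own action output, matching the two approaches described above.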
Similar Papers
Impact of Gaze-Based Interaction and Augmentation on Human-Robot Collaboration in Critical Tasks
Robotics
Helps robots find people faster using eye movements.
Look, Zoom, Understand: The Robotic Eyeball for Embodied Perception
Robotics
Robotic eye learns to look and zoom for details.
Vision in Action: Learning Active Perception from Human Demonstrations
Robotics
Robots learn to grab things by watching people.