Intent at a Glance: Gaze-Guided Robotic Manipulation via Foundation Models
By: Tracey Yee Hsin Tay, Xu Yan, Jonathan Ouyang, and more
Potential Business Impact:
A robot follows your eyes to do tasks.
Designing intuitive interfaces for robotic control remains a central challenge in enabling effective human-robot interaction, particularly in assistive care settings. Eye gaze offers a fast, non-intrusive, and intent-rich input modality, making it an attractive channel for conveying user goals. In this work, we present GAMMA (Gaze Assisted Manipulation for Modular Autonomy), a system that leverages egocentric gaze tracking and a vision-language model to infer user intent and autonomously execute robotic manipulation tasks. By contextualizing gaze fixations within the scene, the system maps visual attention to high-level semantic understanding, enabling skill selection and parameterization without task-specific training. We evaluate GAMMA on a range of tabletop manipulation tasks and compare it against a baseline of gaze-based control without reasoning. Results demonstrate that GAMMA provides robust, intuitive, and generalizable control, highlighting the potential of combining foundation models and gaze for natural and scalable robot autonomy. Project website: https://gamma0.vercel.app/
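To make the described pipeline concrete, below is a minimal sketch of how a gaze fixation, contextualized by a scene description, could drive vision-language-model-based skill selection. This is not the authors' implementation: the names (Fixation, SKILL_LIBRARY, query_vlm, execute_intent) and the prompt format are hypothetical, and the VLM call is stubbed out for illustration.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    """A 2D gaze fixation in egocentric image coordinates."""
    x: int
    y: int
    duration_s: float

# Hypothetical skill library: each skill is a parameterized manipulation
# primitive that takes a target object label.
SKILL_LIBRARY = {
    "pick": lambda obj: print(f"picking {obj}"),
    "place": lambda obj: print(f"placing {obj}"),
    "push": lambda obj: print(f"pushing {obj}"),
}

def build_prompt(fixation: Fixation, scene_caption: str) -> str:
    """Contextualize the gaze fixation within the scene for the VLM.

    A real system would also pass the egocentric camera frame; here only
    the textual part of the query is shown.
    """
    return (
        f"The user fixated at pixel ({fixation.x}, {fixation.y}) for "
        f"{fixation.duration_s:.1f}s. Scene: {scene_caption}. "
        f"Choose one skill from {list(SKILL_LIBRARY)} and its target object, "
        "answering as 'skill: object'."
    )

def query_vlm(prompt: str) -> str:
    """Stub for a vision-language model call.

    A real system would send the frame and prompt to a VLM and parse the
    reply; a canned answer is returned here so the sketch runs end to end.
    """
    return "pick: red mug"

def execute_intent(fixation: Fixation, scene_caption: str) -> None:
    """Map a gaze fixation to a skill invocation via the VLM."""
    response = query_vlm(build_prompt(fixation, scene_caption))
    skill_name, target = (s.strip() for s in response.split(":", 1))
    SKILL_LIBRARY[skill_name](target)

if __name__ == "__main__":
    execute_intent(Fixation(x=412, y=230, duration_s=1.2),
                   "a table with a red mug, a bowl, and a spoon")
```

The point mirrored here is that the fixation grounds the query, so the model only has to name a skill and a target rather than produce low-level motions, which is what allows skill selection without task-specific training.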
Similar Papers
MindEye-OmniAssist: A Gaze-Driven LLM-Enhanced Assistive Robot System for Implicit Intention Recognition and Task Execution
Robotics
Robots understand what you want by watching your eyes.
RaycastGrasp: Eye-Gaze Interaction with Wearable Devices for Robotic Manipulation
Robotics
You look at an object, and the robot grabs it.
Look, Focus, Act: Efficient and Robust Robot Learning via Human Gaze and Foveated Vision Transformers
Robotics
Robots see better by looking the way humans do.