Eyes on Target: Gaze-Aware Object Detection in Egocentric Video
By: Vishakha Lall, Yisi Liu
Potential Business Impact:
Helps computers detect the objects a person is looking at.
Human gaze offers rich supervisory signals for understanding visual attention in complex environments. In this paper, we propose Eyes on Target, a novel depth-aware and gaze-guided object detection framework designed for egocentric videos. Our approach injects gaze-derived features into the attention mechanism of a Vision Transformer (ViT), effectively biasing spatial feature selection toward human-attended regions. Unlike traditional object detectors that treat all regions equally, our method emphasises viewer-prioritised areas to enhance object detection. We validate our method on an egocentric simulator dataset where human visual attention is critical for task assessment, illustrating its potential for evaluating human performance in simulation scenarios. We evaluate the effectiveness of our gaze-integrated model through extensive experiments and ablation studies, demonstrating consistent gains in detection accuracy over gaze-agnostic baselines on both the custom simulator dataset and public benchmarks, including the Ego4D Ego-Motion and Ego-CH-Gaze datasets. To interpret model behaviour, we also introduce a gaze-aware attention head importance metric, revealing how gaze cues modulate transformer attention dynamics.
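The abstract describes injecting gaze-derived features into ViT attention so that patches the viewer attends to receive more weight. The paper's exact injection mechanism is not spelled out here; the following is a minimal PyTorch sketch of one plausible variant, in which per-patch gaze saliency is added as a bias to the attention logits. The module name `GazeBiasedSelfAttention`, the `gaze_scores` input, and the learnable `gaze_weight` scale are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: additive gaze bias on ViT self-attention logits.
import torch
import torch.nn as nn

class GazeBiasedSelfAttention(nn.Module):
    """ViT-style multi-head self-attention with an additive gaze bias.

    `gaze_scores` is a (batch, num_patches) map of gaze-derived saliency,
    e.g. a gaze heatmap pooled to the patch grid (hypothetical input).
    """
    def __init__(self, dim: int, num_heads: int = 8, gaze_weight: float = 1.0):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable scale controlling how strongly gaze biases attention.
        self.gaze_weight = nn.Parameter(torch.tensor(gaze_weight))

    def forward(self, x: torch.Tensor, gaze_scores: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape                                   # tokens = image patches
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)                # each: (B, heads, N, head_dim)

        attn = (q @ k.transpose(-2, -1)) * self.scale       # (B, heads, N, N)
        # Bias every query's logits toward patches the viewer attended to.
        bias = self.gaze_weight * gaze_scores[:, None, None, :]   # (B, 1, 1, N)
        attn = (attn + bias).softmax(dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Usage sketch: 196 patches (14x14 grid) of a 224x224 frame, embedding dim 768.
tokens = torch.randn(2, 196, 768)
gaze = torch.rand(2, 196)          # e.g. normalised per-patch gaze heatmap
out = GazeBiasedSelfAttention(dim=768)(tokens, gaze)
print(out.shape)                   # torch.Size([2, 196, 768])
```

Under this formulation, setting the gaze bias to zero recovers a standard gaze-agnostic ViT attention block, which is the kind of baseline the abstract compares against.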
Similar Papers
In the Eye of MLLM: Benchmarking Egocentric Video Intent Understanding with Gaze-Guided Prompting
CV and Pattern Recognition
AI watches where you look to understand what you intend to do.
Beyond Gaze Overlap: Analyzing Joint Visual Attention Dynamics Using Egocentric Data
Human-Computer Interaction
Shows when people look at the same thing.
Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification
CV and Pattern Recognition
Guides computers to see like humans, improving accuracy.