Beyond Gaze Overlap: Analyzing Joint Visual Attention Dynamics Using Egocentric Data

Published: September 15, 2025 | arXiv ID: 2509.12419v1

By: Kumushini Thennakoon, Yasasi Abeysinghe, Bhanuka Mahanama, and more

Potential Business Impact:

Detects when two or more people are looking at the same thing at the same time, a cue for measuring collaboration during shared tasks.

Business Areas:
Image Recognition, Data and Analytics, Software

Joint visual attention (JVA) provides informative cues about human behavior during social interactions. The ubiquity of egocentric eye trackers and of large-scale datasets of everyday interactions offers research opportunities for identifying JVA in multi-user environments. We propose a novel approach that builds spatiotemporal tubes centered on each individual's gaze and detects JVA using deep-learning-based feature mapping. Our results reveal that object-focused collaborative tasks yield higher JVA (44-46%), whereas independent tasks yield lower shared attention (4-5%). Beyond detecting JVA, we analyze attention characteristics using the ambient-focal attention coefficient K to understand the qualitative aspects of shared attention. Our analysis reveals that K converges in instances where participants interact with shared objects and diverges when they work independently. Our study presents seminal findings on joint attention measured with commodity egocentric eye trackers, and it indicates the potential utility of our approach in psychology, human-computer interaction, and social robotics, particularly for understanding attention-coordination mechanisms in ecologically valid contexts.
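The ambient-focal coefficient K referenced in the abstract (due to Krejtz et al.) contrasts each fixation's duration with the amplitude of the saccade that follows it: long fixations with short saccades indicate focal attention, short fixations with long saccades indicate ambient scanning. A minimal sketch of that idea, assuming sample z-scoring over the whole recording and using hypothetical fixation data (not the paper's):

```python
from statistics import mean, stdev

def k_series(durations, amplitudes):
    """Per-fixation ambient-focal coefficient (Krejtz et al.'s K):
    z-scored fixation duration minus z-scored amplitude of the
    following saccade, standardized over the whole recording.
    Averaging K over a time window characterizes that window:
    K > 0 suggests focal attention, K < 0 ambient attention."""
    mu_d, sd_d = mean(durations), stdev(durations)
    mu_a, sd_a = mean(amplitudes), stdev(amplitudes)
    return [(d - mu_d) / sd_d - (a - mu_a) / sd_a
            for d, a in zip(durations, amplitudes)]

# Hypothetical recording: short fixations with long saccades
# (ambient scanning), then long fixations with short saccades (focal).
durations = [120, 110, 130, 400, 380, 420]   # fixation durations, ms
amplitudes = [8.0, 7.5, 9.0, 1.0, 1.5, 0.8]  # saccade amplitudes, degrees
ks = k_series(durations, amplitudes)
print(mean(ks[:3]) < 0 < mean(ks[3:]))  # → True (ambient vs focal window)
```

Because the z-scores are standardized over the full recording, K averaged over everything is zero by construction; the informative quantity is its value within time windows, which is what the paper compares across participants to assess convergence.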

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Human-Computer Interaction