Beyond Gaze Overlap: Analyzing Joint Visual Attention Dynamics Using Egocentric Data
By: Kumushini Thennakoon, Yasasi Abeysinghe, Bhanuka Mahanama, and more
Potential Business Impact:
Shows when people look at the same thing.
Joint visual attention (JVA) provides informative cues about human behavior during social interactions. The ubiquity of egocentric eye trackers and of large-scale datasets of everyday interactions opens research opportunities for identifying JVA in multi-user environments. We propose a novel approach that constructs spatiotemporal tubes centered on each individual's gaze and detects JVA through deep-learning-based feature mapping between the tubes. Our results show that object-focused collaborative tasks yield high JVA (44-46%), whereas independent tasks yield low JVA (4-5%). Beyond detecting JVA, we analyze attention characteristics using the ambient-focal attention coefficient $\mathcal{K}$ to understand the qualitative aspects of shared attention. Our analysis reveals that $\mathcal{K}$ converges across participants when they interact with shared objects and diverges when they work independently. While our study presents seminal findings on joint attention measured with commodity egocentric eye trackers, it also indicates the potential utility of our approach in psychology, human-computer interaction, and social robotics, particularly for understanding attention-coordination mechanisms in ecologically valid contexts.
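The abstract describes the detection pipeline only at a high level. Below is a minimal sketch of the tube-matching idea: crop a gaze-centered patch from each video frame to form a spatiotemporal tube per wearer, embed the tubes with a pretrained network, and flag JVA when the embeddings agree. The crop size, the ResNet-18 backbone, and the 0.8 similarity threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch of gaze-centered spatiotemporal-tube matching for JVA detection.
# Assumes frames are uint8 NumPy arrays (H, W, 3) and gaze points are pixel
# coordinates; backbone, crop size, and threshold are illustrative choices.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Feature extractor: pretrained ResNet-18 with the classifier head removed,
# so each crop maps to a 512-d embedding.
_backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
_backbone.fc = torch.nn.Identity()
_backbone.eval()

_preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gaze_tube(frames, gaze_xy, half=64):
    """Crop a gaze-centered spatiotemporal tube: one patch per frame."""
    patches = []
    for frame, (x, y) in zip(frames, gaze_xy):
        h, w = frame.shape[:2]
        x0, x1 = max(0, int(x) - half), min(w, int(x) + half)
        y0, y1 = max(0, int(y) - half), min(h, int(y) + half)
        patches.append(frame[y0:y1, x0:x1])
    return patches

@torch.no_grad()
def tube_embedding(patches):
    """Average per-patch embeddings into one descriptor for the tube."""
    batch = torch.stack([_preprocess(p) for p in patches])
    feats = _backbone(batch)                                # (T, 512)
    feats = torch.nn.functional.normalize(feats, dim=1)
    return feats.mean(dim=0)

def is_joint_attention(tube_a, tube_b, threshold=0.8):
    """Flag JVA when the two wearers' gaze tubes embed to similar features."""
    ea, eb = tube_embedding(tube_a), tube_embedding(tube_b)
    sim = torch.dot(ea, eb) / (ea.norm() * eb.norm())       # cosine similarity
    return sim.item() >= threshold
```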
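The ambient-focal coefficient $\mathcal{K}$ referenced above is a standard eye-tracking measure (Krejtz et al., 2016): for each fixation $i$ with duration $d_i$ and following saccade amplitude $a_{i+1}$, $\mathcal{K} = \frac{1}{n}\sum_{i}\left(\frac{d_i - \mu_d}{\sigma_d} - \frac{a_{i+1} - \mu_a}{\sigma_a}\right)$, so positive values indicate focal attention (long fixations, short following saccades) and negative values indicate ambient attention. A short sketch, assuming fixation durations and saccade amplitudes have already been extracted and aligned:

```python
# Ambient-focal attention coefficient K (Krejtz et al., 2016).
# Input alignment is an assumption of this sketch: saccade_amplitudes[i]
# is the amplitude of the saccade that follows fixation i.
import numpy as np

def coefficient_k(fix_durations, saccade_amplitudes):
    """Mean difference of z-scored fixation durations and the z-scored
    amplitudes of the saccades that follow them; K > 0 = focal, K < 0 = ambient."""
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(saccade_amplitudes, dtype=float)
    zd = (d - d.mean()) / d.std(ddof=1)
    za = (a - a.mean()) / a.std(ddof=1)
    return float(np.mean(zd - za))
```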
Similar Papers
Eyes on Target: Gaze-Aware Object Detection in Egocentric Video
CV and Pattern Recognition
Helps computers see what people are looking at.
In the Eye of MLLM: Benchmarking Egocentric Video Intent Understanding with Gaze-Guided Prompting
CV and Pattern Recognition
AI uses where you look to understand what you intend to do.
HeedVision: Attention Awareness in Collaborative Immersive Analytics Environments
Human-Computer Interaction
Shows where everyone is looking in virtual reality.