CSGaze: Context-aware Social Gaze Prediction
By: Surbhi Madan, Shreya Ghosh, Ramanathan Subramanian, and more
Potential Business Impact:
Helps computers understand where people are looking.
A person's gaze offers valuable insights into their focus of attention, level of social engagement, and confidence. In this work, we investigate how contextual cues, combined with visual scene and facial information, can be effectively utilized to predict and interpret social gaze patterns during conversational interactions. We introduce CSGaze, a context-aware multimodal approach that leverages facial and scene information as complementary inputs to enhance social gaze pattern prediction from multi-person images. The model also incorporates a fine-grained attention mechanism centered on the principal speaker, which helps in better modeling social gaze dynamics. Experimental results show that CSGaze performs competitively with state-of-the-art methods on GP-Static, UCO-LAEO, and AVA-LAEO. Our findings highlight the role of contextual cues in improving social gaze prediction. Additionally, we provide initial explainability through generated attention scores, offering insights into the model's decision-making process. We also demonstrate our model's generalizability by testing it on open-set datasets, showing its robustness across diverse scenarios.
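The abstract describes fusing facial and scene features with an attention mechanism centered on the principal speaker. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of the general idea: the speaker's embedding queries the other face embeddings, the attended face context is fused with a scene feature, and the attention weights themselves can be inspected for explainability. All function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def speaker_centered_attention(speaker_feat, face_feats, scene_feat):
    """Hypothetical sketch of speaker-centered multimodal fusion.

    speaker_feat: (d,) embedding of the principal speaker's face
    face_feats:   (n, d) embeddings of the faces in the image
    scene_feat:   (d,) global scene embedding
    Returns a fused feature and the attention weights (for inspection).
    """
    # scaled dot-product scores: how strongly each face relates to the speaker
    scores = face_feats @ speaker_feat / np.sqrt(speaker_feat.shape[-1])
    weights = softmax(scores)                # one weight per face, sums to 1
    face_context = weights @ face_feats      # attention-weighted face summary
    # simple late fusion with the scene feature (an assumption; the paper
    # may use a learned fusion instead)
    fused = np.concatenate([face_context, scene_feat])
    return fused, weights
```

The returned `weights` vector plays the role of the "generated attention scores" mentioned in the abstract: a downstream classifier would consume `fused`, while `weights` offers a per-face view of what the model attended to.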
Similar Papers
Seeing My Future: Predicting Situated Interaction Behavior in Virtual Reality
CV and Pattern Recognition
Predicts what you'll do next in virtual worlds.
Eyes on Target: Gaze-Aware Object Detection in Egocentric Video
CV and Pattern Recognition
Helps computers see what people are looking at.
StreamGaze: Gaze-Guided Temporal Reasoning and Proactive Understanding in Streaming Videos
CV and Pattern Recognition
Teaches computers to understand what you're looking at.