GazeVLM: A Vision-Language Model for Multi-Task Gaze Understanding
By: Athul M. Mathew, Haithem Hermassi, Thariq Khalid, and more
Potential Business Impact:
Helps computers understand where people are looking.
Gaze understanding unifies the detection of people, their gaze targets, and objects of interest into a single framework, offering critical insight into visual attention and intent estimation. Although prior research has modelled gaze cues in visual scenes, a unified system is still needed for gaze understanding driven by both visual and language prompts. This paper introduces GazeVLM, a novel Vision-Language Model (VLM) for multi-task gaze understanding in images that addresses person detection, gaze target detection, and gaze object identification. While other transformer-based methods exist for gaze analysis, GazeVLM is, to our knowledge, the first application of a VLM to these combined tasks, allowing selective execution of each task. Integrating visual (RGB and depth) and textual modalities, our ablation study on visual input combinations revealed that fusing RGB images with HHA-encoded depth maps, guided by text prompts, yields superior performance. We also introduce an object-level gaze detection metric for gaze object identification ($AP_{ob}$). In experiments, GazeVLM demonstrates significant improvements, achieving state-of-the-art evaluation scores on the GazeFollow and VideoAttentionTarget datasets.
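The abstract does not spell out how the object-level metric $AP_{ob}$ is computed. As a rough illustration only, the sketch below shows a conventional detection-style average precision over predicted gaze-object boxes (score-ranked greedy matching against ground truth at an IoU threshold, 11-point interpolation). The function names, the IoU threshold, and the interpolation scheme are assumptions, not the paper's definition.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, iou_thresh=0.5):
    """Detection-style AP: preds are (score, box) pairs, gts is a
    list of ground-truth boxes. Hypothetical stand-in for AP_ob."""
    preds = sorted(preds, key=lambda p: -p[0])
    matched = [False] * len(gts)
    tps = 0
    points = []  # (recall, precision) after each ranked prediction
    for rank, (score, box) in enumerate(preds, start=1):
        best, best_i = 0.0, -1
        for i, gt in enumerate(gts):
            if not matched[i] and iou(box, gt) > best:
                best, best_i = iou(box, gt), i
        if best >= iou_thresh and best_i >= 0:
            matched[best_i] = True
            tps += 1
        points.append((tps / max(len(gts), 1), tps / rank))
    # 11-point interpolated AP, as in classic PASCAL VOC
    ap = 0.0
    for r10 in range(11):
        r = r10 / 10
        ap += max((p for rec, p in points if rec >= r), default=0.0) / 11
    return ap
```

A single correct, high-scoring prediction against one ground-truth box yields an AP of 1.0 under this scheme; the paper's actual $AP_{ob}$ may differ in thresholds and interpolation.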
Similar Papers
Eye Gaze Tells You Where to Compute: Gaze-Driven Efficient VLMs
CV and Pattern Recognition
Makes smart glasses understand things faster.
From Gaze to Insight: Bridging Human Visual Attention and Vision Language Model Explanation for Weakly-Supervised Medical Image Segmentation
CV and Pattern Recognition
Helps doctors find sickness in scans faster.
Gaze-VLM: Bridging Gaze and VLMs through Attention Regularization for Egocentric Understanding
CV and Pattern Recognition
Makes computers understand what you're looking at.