GazeVLM: A Vision-Language Model for Multi-Task Gaze Understanding

Published: November 9, 2025 | arXiv ID: 2511.06348v1

By: Athul M. Mathew, Haithem Hermassi, Thariq Khalid, and more

Potential Business Impact:

Helps computers understand where people are looking.

Business Areas:
Image Recognition, Data and Analytics, Software

Gaze understanding unifies the detection of people, their gaze targets, and objects of interest into a single framework, offering critical insight into visual attention and intent estimation. Although prior research has modelled gaze cues in visual scenes, a unified system is still needed for gaze understanding driven by both visual and language prompts. This paper introduces GazeVLM, a novel Vision-Language Model (VLM) for multi-task gaze understanding in images, addressing person detection, gaze target detection, and gaze object identification. While other transformer-based methods exist for gaze analysis, GazeVLM is, to our knowledge, the first application of a VLM to these combined tasks, and it allows selective execution of each task. An ablation study on visual input combinations shows that fusing RGB images with HHA-encoded depth maps, guided by text prompts, yields the best performance. The paper also introduces an object-level gaze detection metric for gaze object identification ($AP_{ob}$). In experiments, GazeVLM achieves state-of-the-art evaluation scores on the GazeFollow and VideoAttentionTarget datasets.
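The summary above mentions fusing RGB images with HHA-encoded depth maps as the model's visual input. The exact architecture is not described in this summary, so the sketch below only illustrates the general idea of channel-level RGB + HHA fusion; the function names are hypothetical, and the HHA encoding is a stand-in (real HHA encoding requires camera geometry to compute horizontal disparity, height above ground, and angle with gravity):

```python
import numpy as np

def encode_hha_stub(depth):
    """Stand-in for HHA encoding (horizontal disparity, height above
    ground, angle with gravity). Real HHA needs camera intrinsics and a
    gravity estimate; here we only produce a 3-channel map of the right
    shape for illustration."""
    disparity = 1.0 / np.clip(depth, 1e-3, None)   # inverse depth as disparity proxy
    height = depth                                  # placeholder for height above ground
    angle = np.gradient(depth, axis=0)              # placeholder for surface angle
    return np.stack([disparity, height, angle], axis=-1)

def fuse_inputs(rgb, depth):
    """Concatenate RGB (H, W, 3) with an HHA map (H, W, 3) into a single
    6-channel visual input, which a VLM's vision encoder could consume
    alongside a text prompt such as 'detect the gaze target'."""
    hha = encode_hha_stub(depth)
    return np.concatenate([rgb, hha], axis=-1)

rgb = np.random.rand(224, 224, 3)
depth = np.random.rand(224, 224) + 0.5  # avoid zero depth
fused = fuse_inputs(rgb, depth)
print(fused.shape)  # (224, 224, 6)
```

This only shows why a depth encoding like HHA is convenient for fusion: it shares the 3-channel image layout, so RGB and depth can be stacked into one tensor without architectural changes to the vision encoder.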

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition