Can Vision Language Models Infer Human Gaze Direction? A Controlled Study
By: Zory Zhang, Pinyuan Feng, Bingyang Wang, and more
Potential Business Impact:
Most AI vision models still can't tell where a person is looking.
Gaze-referential inference, the ability to infer what others are looking at, is a critical component of the theory of mind that underpins natural human-AI interaction. In a controlled study, we evaluated this skill across 111 Vision Language Models (VLMs) using photographs with systematically manipulated difficulty and variability, compared their performance with that of human participants (N = 65), and analyzed response behaviors with mixed-effects models. We found that 94 of the 111 VLMs failed to do better than random guessing, while humans achieved near-ceiling accuracy. The VLMs even selected each answer choice at nearly equal frequency. Are they randomly guessing? Although most VLMs struggle, when we zoomed in on five top-tier VLMs with above-chance performance, we found that their accuracy declined with increasing task difficulty but varied only slightly across prompts and scene objects. These behavioral features cannot be explained by treating the models as random guessers. Instead, they likely use a combination of heuristics and guessing, so that their performance degrades with task difficulty yet remains robust to perceptual variation. This suggests that VLMs, still lacking gaze-inference capability, have yet to become technologies that can interact naturally with humans, but the potential remains.
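The abstract names two analyses: comparing each model's accuracy against chance, and relating per-trial correctness to task difficulty with a mixed-effects model. Below is a minimal Python sketch of that style of analysis, not the authors' code; the synthetic data, the column names ("correct", "difficulty", "prompt"), and the assumption of four answer options (chance = 0.25) are all hypothetical stand-ins.

```python
# Minimal sketch: (1) test whether a VLM beats random guessing,
# (2) model per-trial correctness as a function of task difficulty
# with a random intercept per prompt variant.
import numpy as np
import pandas as pd
from scipy.stats import binomtest
import statsmodels.formula.api as smf

# Hypothetical per-trial results for one VLM. Toy generative assumption:
# accuracy falls as difficulty rises (0.9 at level 1 down to 0.5 at level 3).
rng = np.random.default_rng(0)
n = 200
difficulty = rng.integers(1, 4, size=n)          # levels 1-3, harder = larger
prompt = rng.choice(["a", "b", "c"], size=n)     # prompt variant per trial
p_correct = 0.9 - 0.2 * (difficulty - 1)
correct = rng.binomial(1, p_correct)
df = pd.DataFrame({"correct": correct,
                   "difficulty": difficulty,
                   "prompt": prompt})

# (1) Exact binomial test against chance. With four answer options,
# chance accuracy is 0.25; "greater" asks whether the model beats it.
result = binomtest(k=int(df["correct"].sum()), n=len(df),
                   p=0.25, alternative="greater")
print(f"above-chance p-value: {result.pvalue:.4f}")

# (2) Linear mixed-effects model: fixed effect of difficulty, random
# intercept per prompt (a rough stand-in for the logistic mixed models
# typically preferred for binary outcomes).
model = smf.mixedlm("correct ~ difficulty", df, groups=df["prompt"]).fit()
print(model.summary())
```

Under this setup, a negative fixed-effect coefficient on difficulty together with a small variance for the prompt random intercept would mirror the abstract's pattern: performance that tracks task difficulty but is largely insensitive to prompt variation.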
Similar Papers
VL4Gaze: Unleashing Vision-Language Models for Gaze Following
CV and Pattern Recognition
Teaches computers to understand where people are looking.
Eye Gaze Tells You Where to Compute: Gaze-Driven Efficient VLMs
CV and Pattern Recognition
Makes smart glasses understand things faster.
Not There Yet: Evaluating Vision Language Models in Simulating the Visual Perception of People with Low Vision
CV and Pattern Recognition
Helps computers understand how people with poor vision see.