Visual Language Models show widespread visual deficits on neuropsychological tests
By: Gene Tangtartharakul, Katherine R. Storrs
Potential Business Impact:
Computers can recognize objects much as humans do, but miss basic visual details.
Visual Language Models (VLMs) show remarkable performance in visual reasoning tasks, successfully tackling college-level challenges that require high-level understanding of images. However, some recent reports of VLMs struggling to reason about elemental visual concepts like orientation, position, continuity, and occlusion suggest a potential gulf between human and VLM vision. Here we use the toolkit of neuropsychology to systematically assess the capabilities of three state-of-the-art VLMs across visual domains. Using 51 tests drawn from six clinical and experimental batteries, we characterise the visual abilities of leading VLMs relative to normative performance in healthy adults. While the models excel in straightforward object recognition tasks, we find widespread deficits in low- and mid-level visual abilities that would be considered clinically significant in humans. These selective deficits, profiled through validated test batteries, suggest that an artificial system can achieve complex object recognition without developing foundational visual concepts that in humans require no explicit training.
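The abstract does not spell out how "clinically significant" deficits are scored, but a common neuropsychological convention is to flag performance far below the healthy-adult norm for a test. The sketch below illustrates one way such a comparison could be run; the test names, scores, and the two-standard-deviation cutoff are illustrative assumptions, not values taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical normative data: mean and standard deviation of healthy-adult
# scores on each test, plus the score a VLM achieved. All names and numbers
# below are illustrative, not taken from the paper.
@dataclass
class TestResult:
    test_name: str
    vlm_score: float        # proportion of items the model answered correctly
    normative_mean: float   # mean accuracy of healthy adults on the same test
    normative_sd: float     # standard deviation of healthy-adult accuracy

def z_score(result: TestResult) -> float:
    """Express the model's score as standard deviations from the human norm."""
    return (result.vlm_score - result.normative_mean) / result.normative_sd

def is_clinically_significant(result: TestResult, cutoff: float = -2.0) -> bool:
    """Flag scores far below the normative mean. A cutoff of roughly two
    standard deviations (around the bottom ~2% of adults) is a common
    clinical convention; the exact criterion here is an assumption."""
    return z_score(result) <= cutoff

if __name__ == "__main__":
    results = [
        TestResult("object recognition", vlm_score=0.97, normative_mean=0.95, normative_sd=0.04),
        TestResult("line orientation",   vlm_score=0.55, normative_mean=0.92, normative_sd=0.05),
    ]
    for r in results:
        flag = "DEFICIT" if is_clinically_significant(r) else "ok"
        print(f"{r.test_name:20s} z = {z_score(r):+.1f}  {flag}")
```

Run as a script, this would flag the hypothetical "line orientation" test as a deficit while leaving object recognition unflagged, mirroring the paper's reported pattern of strong high-level recognition alongside low- and mid-level failures.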
Similar Papers
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell what's left or right.
VLMs have Tunnel Vision: Evaluating Nonlocal Visual Reasoning in Leading VLMs
CV and Pattern Recognition
Computers can't connect image parts like humans.
Caption This, Reason That: VLMs Caught in the Middle
CV and Pattern Recognition
Helps computers understand pictures better by thinking.