Do Vision-Language Models See Visualizations Like Humans? Alignment in Chart Categorization
By: Péter Ferenc Gyarmati, Manfred Klaffenböck, Laura Koesten, and more
Potential Business Impact:
Helps computers "see" charts like people do.
Vision-language models (VLMs) hold promise for enhancing visualization tools, but effective human-AI collaboration hinges on a shared perceptual understanding of visual content. Prior studies assessed VLM visualization literacy through interpretive tasks, revealing an over-reliance on textual cues rather than genuine visual analysis. Our study investigates a more foundational skill underpinning such literacy: the ability of VLMs to recognize a chart's core visual properties as humans do. We task 13 diverse VLMs with classifying scientific visualizations based solely on visual stimuli, according to three criteria: purpose (e.g., schematic, GUI, visualization), encoding (e.g., bar, point, node-link), and dimensionality (e.g., 2D, 3D). Using expert labels from the human-centric VisType typology as ground truth, we find that VLMs often identify purpose and dimensionality accurately but struggle with specific encoding types. Our preliminary results show that larger models do not always deliver superior performance and highlight the need for careful integration of VLMs in visualization tasks, with human supervision to ensure reliable outcomes.
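The classification protocol described in the abstract (image-only stimuli, fixed label sets for purpose, encoding, and dimensionality) can be illustrated with a short sketch. The snippet below is a hypothetical, minimal version assuming the OpenAI Python SDK and an API-accessible VLM; the model name, prompt wording, and category lists are illustrative placeholders, not the authors' exact setup or the VisType label definitions.

```python
# Minimal sketch: ask a VLM to classify a chart image along three criteria
# (purpose, encoding, dimensionality) using only the image as input.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var;
# model name and label lists below are placeholders for illustration.
import base64
import json
from pathlib import Path

from openai import OpenAI

PROMPT = (
    "Classify the attached scientific figure using only its visual content.\n"
    "Return JSON with three keys:\n"
    '  "purpose": one of ["schematic", "GUI", "visualization"],\n'
    '  "encoding": one of ["bar", "point", "line", "node-link", "other"],\n'
    '  "dimensionality": one of ["2D", "3D"].'
)


def classify_chart(image_path: str, model: str = "gpt-4o-mini") -> dict:
    """Send one chart image to a VLM and parse its JSON classification."""
    client = OpenAI()
    image_b64 = base64.b64encode(Path(image_path).read_bytes()).decode()
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        response_format={"type": "json_object"},  # request machine-readable output
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    # Example: compare the model's labels against expert ground-truth labels.
    print(classify_chart("example_figure.png"))
```

Agreement with expert labels could then be scored per criterion (e.g., accuracy on purpose vs. encoding vs. dimensionality), mirroring the study's finding that encoding is the hardest of the three to recover from pixels alone.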
Similar Papers
Visual Language Models show widespread visual deficits on neuropsychological tests
CV and Pattern Recognition
Computers see things like humans, but miss basic details.
Can VLMs Assess Similarity Between Graph Visualizations?
Human-Computer Interaction
Helps computers "see" how graphs are alike.
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell what's left or right.