Score: 1

Do Vision-Language Models See Visualizations Like Humans? Alignment in Chart Categorization

Published: September 6, 2025 | arXiv ID: 2509.05718v1

By: Péter Ferenc Gyarmati, Manfred Klaffenböck, Laura Koesten, and more

Potential Business Impact:

Helps computers "see" charts like people do.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-language models (VLMs) hold promise for enhancing visualization tools, but effective human-AI collaboration hinges on a shared perceptual understanding of visual content. Prior studies assessed VLM visualization literacy through interpretive tasks, revealing an over-reliance on textual cues rather than genuine visual analysis. Our study investigates a more foundational skill underpinning such literacy: the ability of VLMs to recognize a chart's core visual properties as humans do. We task 13 diverse VLMs with classifying scientific visualizations based solely on visual stimuli, according to three criteria: purpose (e.g., schematic, GUI, visualization), encoding (e.g., bar, point, node-link), and dimensionality (e.g., 2D, 3D). Using expert labels from the human-centric VisType typology as ground truth, we find that VLMs often identify purpose and dimensionality accurately but struggle with specific encoding types. Our preliminary results show that larger model size does not always equate to superior performance and highlight the need for careful integration of VLMs in visualization tasks, with human supervision to ensure reliable outcomes.
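To make the evaluation setup concrete, below is a minimal Python sketch of how per-criterion agreement between VLM predictions and expert labels could be scored. This is not the authors' pipeline; the image IDs, label values, and prediction data are illustrative assumptions, and the label vocabulary is only loosely based on the criteria named in the abstract (purpose, encoding, dimensionality).

```python
# Minimal sketch (not the authors' pipeline): score a VLM's chart
# classifications against expert VisType-style labels on three criteria.
# All data below is hypothetical and for illustration only.

from collections import defaultdict

CRITERIA = ("purpose", "encoding", "dimensionality")

# Hypothetical expert ground-truth labels, keyed by image ID.
ground_truth = {
    "fig_001": {"purpose": "visualization", "encoding": "bar", "dimensionality": "2D"},
    "fig_002": {"purpose": "schematic", "encoding": "node-link", "dimensionality": "2D"},
    "fig_003": {"purpose": "visualization", "encoding": "point", "dimensionality": "3D"},
}

# Hypothetical VLM outputs for the same images (e.g., parsed from a
# constrained-choice prompt that asks for one label per criterion).
vlm_predictions = {
    "fig_001": {"purpose": "visualization", "encoding": "point", "dimensionality": "2D"},
    "fig_002": {"purpose": "schematic", "encoding": "node-link", "dimensionality": "2D"},
    "fig_003": {"purpose": "visualization", "encoding": "point", "dimensionality": "2D"},
}


def per_criterion_accuracy(truth, preds):
    """Fraction of images whose prediction matches the expert label,
    computed separately for each criterion."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image_id, labels in truth.items():
        for criterion in CRITERIA:
            total[criterion] += 1
            if preds.get(image_id, {}).get(criterion) == labels[criterion]:
                correct[criterion] += 1
    return {c: correct[c] / total[c] for c in CRITERIA}


if __name__ == "__main__":
    for criterion, accuracy in per_criterion_accuracy(ground_truth, vlm_predictions).items():
        print(f"{criterion}: {accuracy:.2f}")
```

Breaking agreement out per criterion, rather than reporting a single overall score, mirrors the paper's finding that accuracy differs sharply across criteria: purpose and dimensionality are identified well, while specific encoding types are not.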

Country of Origin
🇦🇹 Austria, 🇦🇪 United Arab Emirates

Page Count
2 pages

Category
Computer Science:
Human-Computer Interaction