EncQA: Benchmarking Vision-Language Models on Visual Encodings for Charts
By: Kushin Mukherjee, Donghao Ren, Dominik Moritz, and more
Potential Business Impact:
Helps computers better understand charts and graphs.
Multimodal vision-language models (VLMs) continue to achieve ever-improving scores on chart understanding benchmarks. Yet, we find that this progress does not fully capture the breadth of visual reasoning capabilities essential for interpreting charts. We introduce EncQA, a novel benchmark informed by the visualization literature, designed to provide systematic coverage of visual encodings and analytic tasks that are crucial for chart understanding. EncQA provides 2,076 synthetic question-answer pairs, enabling balanced coverage of six visual encoding channels (position, length, area, color quantitative, color nominal, and shape) and eight tasks (find extrema, retrieve value, find anomaly, filter values, compute derived value exact, compute derived value relative, correlate values, and correlate values relative). Our evaluation of 9 state-of-the-art VLMs reveals that performance varies significantly across encodings within the same task, as well as across tasks. Contrary to expectations, we observe that performance does not improve with model size for many task-encoding pairs. Our results suggest that advancing chart understanding requires targeted strategies addressing specific visual reasoning gaps, rather than solely scaling up model or dataset size.
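To make the benchmark's structure concrete, here is a minimal sketch of how an EncQA-style item and a per-(task, encoding) accuracy breakdown could be represented. This is an illustrative assumption, not the released data format: the class name EncQAItem, its fields, and the exact-match scoring are hypothetical, though the encoding and task lists mirror those named in the abstract.

```python
# Illustrative sketch only: the dataclass fields and the scoring loop are
# assumptions about how an EncQA-style item might be represented; they do
# not reflect the released dataset schema.
from collections import defaultdict
from dataclasses import dataclass

ENCODINGS = [
    "position", "length", "area",
    "color_quantitative", "color_nominal", "shape",
]
TASKS = [
    "find_extrema", "retrieve_value", "find_anomaly", "filter_values",
    "compute_derived_value_exact", "compute_derived_value_relative",
    "correlate_values", "correlate_values_relative",
]

@dataclass
class EncQAItem:
    chart_image_path: str   # path to the synthetic chart rendering
    encoding: str           # one of ENCODINGS
    task: str               # one of TASKS
    question: str
    answer: str

def accuracy_by_task_and_encoding(items, predict):
    """Aggregate exact-match accuracy for every (task, encoding) cell.

    `predict` is any callable mapping an EncQAItem to a string answer,
    e.g. a thin wrapper around a VLM API.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        key = (item.task, item.encoding)
        total[key] += 1
        if predict(item).strip().lower() == item.answer.strip().lower():
            correct[key] += 1
    return {key: correct[key] / total[key] for key in total}
```

Reporting accuracy per (task, encoding) cell rather than as a single benchmark score is what surfaces the paper's central observation: performance on the same task can differ sharply depending on the visual encoding used.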
Similar Papers
CartoMapQA: A Fundamental Benchmark Dataset Evaluating Vision-Language Models on Cartographic Map Understanding
CV and Pattern Recognition
Helps computers understand maps like people do.
InterChart: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information
Computation and Language
Helps computers understand many charts together.
VQArt-Bench: A semantically rich VQA Benchmark for Art and Cultural Heritage
CV and Pattern Recognition
Tests if computers truly understand art.