ChartAB: A Benchmark for Chart Grounding & Dense Alignment
By: Aniruddh Bansal, Davit Soselia, Dang Nguyen, and more
Potential Business Impact:
Helps computers understand charts better.
Charts play an important role in visualization, reasoning, data analysis, and the exchange of ideas among humans. However, existing vision-language models (VLMs) still lack accurate perception of details and struggle to extract fine-grained structures from charts. Such limitations in chart grounding also hinder their ability to compare multiple charts and reason over them. In this paper, we introduce a novel "ChartAlign Benchmark (ChartAB)" to provide a comprehensive evaluation of VLMs in chart grounding tasks, i.e., extracting tabular data, localizing visualization elements, and recognizing various attributes from charts of diverse types and complexities. We design a JSON template to facilitate the calculation of evaluation metrics specifically tailored for each grounding task. By incorporating a novel two-stage inference workflow, the benchmark can further evaluate VLMs' capability to align and compare elements/attributes across two charts. Our evaluation of several recent VLMs reveals new insights into their perception biases, weaknesses, robustness, and hallucinations in chart understanding. These findings highlight the fine-grained discrepancies among VLMs in chart understanding tasks and point to specific skills that need to be strengthened in current models.
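The abstract does not spell out the JSON template or the metric definitions, but a minimal sketch can illustrate the general idea: a VLM's output for one chart is parsed into a structured record and scored against ground truth, and a second such record could then feed the alignment stage that compares elements across two charts. Everything below is assumed for illustration only; the field names (`chart_type`, `table`, `elements`, `bbox`), the tolerance-based cell metric, and the example values are not taken from ChartAB.

```python
import json

# Hypothetical grounding record in the spirit of the paper's JSON template;
# the actual ChartAB schema is not given in the abstract.
prediction = json.loads("""
{
  "chart_type": "bar",
  "table": {"Q1": 120, "Q2": 95, "Q3": 140},
  "elements": [
    {"name": "Q1", "bbox": [40, 210, 80, 330], "color": "#1f77b4"}
  ]
}
""")

ground_truth = {
    "chart_type": "bar",
    "table": {"Q1": 120, "Q2": 90, "Q3": 140},
    "elements": [
        {"name": "Q1", "bbox": [38, 212, 82, 330], "color": "#1f77b4"}
    ],
}

def table_cell_accuracy(pred: dict, gt: dict, tol: float = 0.05) -> float:
    """Fraction of ground-truth table cells whose predicted value falls within
    a relative tolerance (one plausible data-extraction metric, not the paper's)."""
    hits = 0
    for key, gt_val in gt["table"].items():
        pred_val = pred["table"].get(key)
        if pred_val is not None and abs(pred_val - gt_val) <= tol * abs(gt_val):
            hits += 1
    return hits / len(gt["table"])

print(f"table cell accuracy: {table_cell_accuracy(prediction, ground_truth):.2f}")
```

In the same hypothetical setup, the two-stage workflow would first produce one such record per chart, then match elements by name or position and compare their attributes (values, colors, bounding boxes) to answer cross-chart questions.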
Similar Papers
ChartAnchor: Chart Grounding with Structural-Semantic Fidelity
Artificial Intelligence
Helps computers understand charts and their data.
InterChart: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information
Computation and Language
Helps computers understand many charts together.