InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts
By: Minzhi Lin, Tianchi Xie, Mengchen Liu, and more
Potential Business Impact:
Helps computers understand pictures in charts better.
Understanding infographic charts with design-driven visual elements (e.g., pictograms, icons) requires both visual recognition and reasoning, posing challenges for multimodal large language models (MLLMs). However, existing visual question answering benchmarks fall short in evaluating these capabilities of MLLMs due to the lack of paired plain charts and visual-element-based questions. To bridge this gap, we introduce InfoChartQA, a benchmark for evaluating MLLMs on infographic chart understanding. It includes 5,642 pairs of infographic and plain charts, each sharing the same underlying data but differing in visual presentation. We further design visual-element-based questions to capture their unique visual designs and communicative intent. Evaluation of 20 MLLMs reveals a substantial performance decline on infographic charts, particularly for visual-element-based questions related to metaphors. The paired infographic and plain charts enable fine-grained error analysis and ablation studies, which highlight new opportunities for advancing MLLMs in infographic chart understanding. We release InfoChartQA at https://github.com/CoolDawnAnt/InfoChartQA.
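Because each infographic chart is paired with a plain chart over the same data, the benchmark naturally supports a paired-accuracy comparison. The sketch below is a minimal, hypothetical illustration of such an evaluation loop; the JSON field names (question, answer, infographic_image, plain_image) and the ask_mllm callback are assumptions for illustration, not the benchmark's actual data format or API (see the InfoChartQA repository for the real loaders).

import json
from pathlib import Path
from typing import Callable, Dict

def evaluate_paired(qa_path: Path, ask_mllm: Callable[[Path, str], str]) -> Dict[str, float]:
    """Compare an MLLM's accuracy on infographic vs. plain renderings.

    Assumes each record holds a question, a ground-truth answer, and paths
    to the infographic and plain chart images of the same underlying data.
    """
    records = json.loads(qa_path.read_text())
    correct = {"infographic": 0, "plain": 0}
    for rec in records:
        question = rec["question"]                      # assumed field name
        answer = rec["answer"].strip().lower()          # assumed field name
        for variant in ("infographic", "plain"):
            image_path = Path(rec[f"{variant}_image"])  # assumed field name
            prediction = ask_mllm(image_path, question) # user-supplied model call
            if prediction.strip().lower() == answer:
                correct[variant] += 1
    n = len(records)
    return {variant: count / n for variant, count in correct.items()}

Since the questions and underlying data are identical across the pair, the gap between the two accuracies isolates the cost of the infographic's visual design itself, which is the kind of fine-grained error analysis the paper describes.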
Similar Papers
Chart-HQA: A Benchmark for Hypothetical Question Answering in Charts
Computation and Language
Makes AI understand charts by asking "what if".
ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models
Computation and Language
Helps computers understand charts better.
ChartQAPro: A More Diverse and Challenging Benchmark for Chart Question Answering
Computation and Language
Helps computers understand charts better.