Do MLLMs Really Understand the Charts?
By: Xiao Zhang, Dongyuan Li, Liuyu Xiang, and more
Potential Business Impact:
Helps computers truly understand charts, not just see them.
Although Multimodal Large Language Models (MLLMs) have demonstrated increasingly impressive performance in chart understanding, most exhibit alarming hallucinations and significant performance degradation when handling non-annotated charts. A question therefore arises: do MLLMs really understand charts? Since humans can understand charts and estimate values through visual reasoning, we first carefully establish a comprehensive Chart Reasoning Benchmark (CRBench) to rigorously evaluate the visual reasoning abilities of MLLMs on non-annotated charts. We argue that MLLMs rely primarily on recognition rather than reasoning to interpret charts. To steer MLLMs toward reasonable chart understanding, we propose ChartReasoner, which mimics human behavior by grounding value estimation in chart understanding. Extensive results on the proposed CRBench show that ChartReasoner-3B/7B achieves superior performance in chart reasoning, even compared to GPT-4o and Gemini-2.5-Flash. More importantly, ChartReasoner also demonstrates visual reasoning abilities in general chart comprehension on public benchmarks, leading to significant performance gains and enabling MLLMs to rationally understand charts. The code and dataset will be publicly available upon publication.
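Since CRBench evaluates value estimation on non-annotated charts, the minimal sketch below shows one plausible scoring rule for such questions, assuming a relative-error tolerance as is common in chart QA. The 5% threshold and all function names are illustrative assumptions, not the paper's actual CRBench protocol.

    # Hypothetical sketch: scoring an MLLM's value estimates on a
    # non-annotated chart. CRBench's actual protocol may differ; the
    # 5% relative-error tolerance and all names here are assumptions.

    def relaxed_match(prediction: float, target: float, tol: float = 0.05) -> bool:
        """Count an estimate as correct if it falls within a relative
        tolerance of the ground-truth value (a common chart-QA metric)."""
        if target == 0:
            return abs(prediction) <= tol
        return abs(prediction - target) / abs(target) <= tol

    def accuracy(predictions: list[float], targets: list[float]) -> float:
        """Fraction of value estimates within tolerance."""
        assert len(predictions) == len(targets)
        hits = sum(relaxed_match(p, t) for p, t in zip(predictions, targets))
        return hits / len(targets)

    # Example: estimates read off a bar chart with no value labels.
    print(accuracy([42.0, 18.5, 97.0], [40.0, 19.0, 120.0]))  # ~0.667

Under such a rule, a model that truly reasons over axes and bar heights can score well even without printed value labels, which is exactly the gap between recognition and reasoning that the paper probes.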
Similar Papers
ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models
Computation and Language
Helps computers understand charts better.
Chart-to-Experience: Benchmarking Multimodal LLMs for Predicting Experiential Impact of Charts
Human-Computer Interaction
Helps computers understand how charts make people feel.
Evaluating Graphical Perception with Multimodal LLMs
Computer Vision and Pattern Recognition
Tests how well computers perceive charts compared to people.