GPT-5 Model Corrected GPT-4V's Chart Reading Errors, Not Prompting
By: Kaichun Yang, Jian Chen
Potential Business Impact:
New AI understands charts better than older AI.
We present a quantitative evaluation of the effect of zero-shot large language models (LLMs) and prompting strategies on chart reading tasks. We asked LLMs to answer 107 visualization questions, comparing inference accuracy between the agentic GPT-5 and the multimodal GPT-4V on difficult image instances where GPT-4V failed to produce correct answers. Our results show that model architecture dominates inference accuracy: GPT-5 largely improved accuracy, while prompt variants yielded only small effects. Pre-registration of this work is available here: https://osf.io/u78td/?view_only=6b075584311f48e991c39335c840ded3; the Google Drive materials are here: https://drive.google.com/file/d/1ll8WWZDf7cCNcfNWrLViWt8GwDNSvVrp/view.
Similar Papers
Benchmarking GPT-5 for biomedical natural language processing
Computation and Language
Helps doctors understand medical writing better.
Exploring GPT's Ability as a Judge in Music Understanding
Information Retrieval
Helps computers judge music understanding.
Prompt-Based Clarity Evaluation and Topic Detection in Political Question Answering
Computation and Language
Makes AI better at answering questions clearly.