GPT-5 Model, Not Prompting, Corrected GPT-4V's Chart Reading Errors

Published: October 8, 2025 | arXiv ID: 2510.06782v1

By: Kaichun Yang, Jian Chen

Potential Business Impact:

A newer AI model reads charts more accurately than its predecessor, largely independent of how it is prompted.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present a quantitative evaluation of the effect of zero-shot large language models (LLMs) and prompting on chart reading tasks. We asked LLMs to answer 107 visualization questions to compare inference accuracy between the agentic GPT-5 and the multimodal GPT-4V on difficult image instances where GPT-4V failed to produce correct answers. Our results show that model architecture dominates inference accuracy: GPT-5 largely improved accuracy, while prompt variants yielded only small effects. Pre-registration of this work is available here: https://osf.io/u78td/?view_only=6b075584311f48e991c39335c840ded3; the Google Drive materials are here: https://drive.google.com/file/d/1ll8WWZDf7cCNcfNWrLViWt8GwDNSvVrp/view.
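The comparison described above can be sketched in a few lines: score each of the 107 questions as correct or incorrect per model, then compare overall accuracy. This is a minimal illustrative sketch, not the authors' evaluation code; the function name and all result counts below are hypothetical placeholders, not the paper's data.

```python
def accuracy(flags):
    """Fraction of questions answered correctly (flags are booleans)."""
    return sum(flags) / len(flags)

# Illustrative stand-in data over 107 questions (NOT the paper's results):
# by construction, these are hard instances where GPT-4V answered incorrectly.
gpt4v_correct = [False] * 107
# Hypothetical: the newer model recovers many of the failed instances.
gpt5_correct = [True] * 80 + [False] * 27

gap = accuracy(gpt5_correct) - accuracy(gpt4v_correct)
print(f"GPT-4V: {accuracy(gpt4v_correct):.2f}  "
      f"GPT-5: {accuracy(gpt5_correct):.2f}  gap: {gap:+.2f}")
```

The same per-question correctness flags could also feed a paired test (e.g. McNemar's) to check whether the accuracy gap is statistically significant, which is the natural follow-up for a model-versus-model comparison on shared items.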

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
17 pages

Category
Computer Science:
Human-Computer Interaction