Understanding Why ChatGPT Outperforms Humans in Visualization Design Advice
By: Yongsu Ahn, Nam Wook Kim
Potential Business Impact:
AI gives better visualization design advice than people.
This paper investigates why recent generative AI models outperform humans in data visualization knowledge tasks. Through a systematic comparative analysis of responses to visualization questions, we find that two ChatGPT models and humans differ in rhetorical structure, knowledge breadth, and perceptual quality. Our findings reveal that ChatGPT-4, as the more advanced model, displays a hybrid of characteristics from both humans and ChatGPT-3.5. Both models were generally favored over human responses; their strengths in coverage and breadth, together with their emphasis on technical and task-oriented visualization feedback, collectively shaped their higher overall quality. Based on these findings, we draw implications for advancing user experiences, grounded in both the potential of LLMs and human perceptions of their capabilities, with relevance to broader applications of AI.
Similar Papers
A Comparison of Human and ChatGPT Classification Performance on Complex Social Media Data
Computation and Language
AI struggles to understand tricky words.
Stable diffusion models reveal a persisting human and AI gap in visual creativity
Artificial Intelligence
AI makes pictures, but artists are more creative.
Personality over Precision: Exploring the Influence of Human-Likeness on ChatGPT Use for Search
Human-Computer Interaction
Human-like chatbots make people trust wrong answers.