Chart-to-Experience: Benchmarking Multimodal LLMs for Predicting Experiential Impact of Charts
By: Seon Gyeom Kim, Jae Young Choi, Ryan Rossi, and more
Potential Business Impact:
Helps computers understand how charts make people feel.
The field of Multimodal Large Language Models (MLLMs) has made remarkable progress in visual understanding tasks, presenting a vast opportunity to predict the perceptual and emotional impact of charts. However, it also raises concerns, as many applications of LLMs rest on overgeneralized assumptions drawn from a few examples, without sufficient validation of their performance and effectiveness. We introduce Chart-to-Experience, a benchmark dataset comprising 36 charts, evaluated by crowdsourced workers for their impact on seven experiential factors. Using the dataset as ground truth, we evaluated the capabilities of state-of-the-art MLLMs on two tasks: direct prediction and pairwise comparison of charts. Our findings imply that MLLMs are not as sensitive as human evaluators when assessing individual charts, but are accurate and reliable in pairwise comparisons.
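To make the pairwise-comparison task concrete, below is a minimal Python sketch of how an MLLM could be prompted to compare two charts on a single experiential factor. This is an illustration under stated assumptions, not the paper's actual protocol: the OpenAI Python client and the "gpt-4o" model stand in for "a state-of-the-art MLLM", and the factor name "aesthetic pleasure" is a hypothetical placeholder for one of the seven experiential factors.

# Minimal sketch of a pairwise chart comparison with an MLLM (assumptions noted above).
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def encode_chart(path: str) -> str:
    """Return a data URL for a chart image so it can be sent inline."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

def compare_charts(chart_a: str, chart_b: str, factor: str) -> str:
    """Ask the MLLM which of two charts rates higher on one experiential factor."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for any state-of-the-art MLLM
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Which of these two charts evokes more {factor} in a viewer? "
                         "Answer with 'A' (first image) or 'B' (second image) only."},
                {"type": "image_url", "image_url": {"url": encode_chart(chart_a)}},
                {"type": "image_url", "image_url": {"url": encode_chart(chart_b)}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # "aesthetic pleasure" is a hypothetical factor name, used only for illustration.
    print(compare_charts("chart_a.png", "chart_b.png", "aesthetic pleasure"))

Aggregating such A/B judgments across chart pairs, and comparing them against the crowdsourced ratings, is one straightforward way to score an MLLM on the pairwise-comparison task.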
Similar Papers
Do MLLMs Really Understand the Charts?
Computation and Language
Helps computers truly understand charts, not just see them.
ChartEdit: How Far Are MLLMs From Automating Chart Analysis? Evaluating MLLMs' Capability via Chart Editing
Computation and Language
Computers can now edit charts, but not perfectly.
Evaluating Graphical Perception with Multimodal LLMs
Computer Vision and Pattern Recognition
Computers now understand charts better than people.