Eye of the Beholder: Towards Measuring Visualization Complexity
By: Johannes Ellemose, Niklas Elmqvist
Constructing expressive and legible visualizations is a key activity for visualization designers. While numerous design guidelines exist, research on how specific graphical features affect perceived visual complexity remains limited. In this paper, we report on a crowdsourced study to collect human ratings of perceived complexity for diverse visualizations. Using these ratings as ground truth, we then evaluated three methods to estimate this perceived complexity: image analysis metrics, multilinear regression using manually coded visualization features, and automated feature extraction using a large language model (LLM). Image complexity metrics showed no correlation with human-perceived visualization complexity. Manual feature coding produced a reasonable predictive model but required substantial effort. In contrast, a zero-shot LLM (GPT-4o mini) demonstrated strong capabilities in both rating complexity and extracting relevant features. Our findings suggest that visualization complexity is truly in the eye of the beholder, yet can be effectively approximated using zero-shot LLM prompting, offering a scalable approach for evaluating the complexity of visualizations. The dataset and code for the study and data analysis can be found at https://osf.io/w85a4/
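One of the three estimation methods, multilinear regression over manually coded visualization features, can be sketched as follows. This is a minimal illustration, not the paper's actual model: the feature names (mark count, color count, axis count) and the ratings are hypothetical placeholders standing in for the study's coded features and crowdsourced ground truth.

```python
import numpy as np

# Hypothetical manually coded features for six visualizations:
# [num_marks, num_colors, num_axes] -- illustrative stand-ins,
# not the feature set used in the paper.
X = np.array([
    [120, 3, 2],
    [40, 1, 2],
    [300, 8, 3],
    [75, 2, 2],
    [500, 10, 4],
    [20, 1, 1],
], dtype=float)

# Hypothetical mean perceived-complexity ratings (1-5 scale).
y = np.array([2.8, 1.5, 4.1, 2.2, 4.8, 1.1])

# Prepend an intercept column and fit ordinary least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    """Predict perceived complexity from a coded feature vector."""
    return coef[0] + np.dot(coef[1:], np.asarray(features, dtype=float))

print(predict([100, 4, 2]))
```

As the abstract notes, the bottleneck of this approach is not the regression itself but the manual effort of coding each visualization's features, which is what motivates the zero-shot LLM alternative.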