Cross-Cultural Expert-Level Art Critique Evaluation with Vision-Language Models
By: Haorui Yu, Ramon Ruiz-Dolz, Xuehang Wen, and more
Potential Business Impact:
Helps computers understand art from different cultures.
Vision-Language Models (VLMs) excel at visual perception, yet their ability to interpret cultural meaning in art remains under-validated. We present a tri-tier evaluation framework for cross-cultural art-critique assessment: Tier I computes automated coverage and risk indicators offline; Tier II applies rubric-based scoring using a single primary judge across five dimensions; and Tier III calibrates the Tier II aggregate score to human ratings via isotonic regression, yielding a 5.2% reduction in MAE on a 152-sample held-out set. The framework outputs a calibrated cultural-understanding score for model selection and cultural-gap diagnosis, together with dimension-level diagnostics and risk indicators. We evaluate 15 VLMs on 294 expert anchors spanning six cultural traditions. Key findings are that (i) automated metrics are unreliable proxies for cultural depth, (ii) Western samples score higher than non-Western samples under our sampling and rubric, and (iii) cross-judge scale mismatch makes naive score averaging unreliable, motivating a single primary judge with explicit calibration. Dataset and code are available in the supplementary materials.
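To make the Tier III step concrete, here is a minimal sketch of how an aggregate judge score could be calibrated to human ratings with isotonic regression and scored by MAE on a held-out set. It assumes scikit-learn's IsotonicRegression; the synthetic data, the split, and all variable names are illustrative stand-ins, not the paper's actual pipeline or results.

```python
# Hypothetical sketch of Tier III calibration: fit a monotone mapping from
# the Tier II aggregate judge score to human ratings, then compare held-out
# MAE before and after calibration. Data here is synthetic for illustration.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Toy stand-ins: raw Tier II aggregates and noisy human ratings per sample.
judge_scores = rng.uniform(1.0, 5.0, size=300)
human_ratings = np.clip(judge_scores + rng.normal(0, 0.5, 300), 1.0, 5.0)

# Split into a calibration-fit set and a held-out evaluation set
# (152 held-out samples, matching the size reported in the abstract).
fit_idx, held_idx = np.arange(148), np.arange(148, 300)

iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(judge_scores[fit_idx], human_ratings[fit_idx])

raw_mae = mean_absolute_error(human_ratings[held_idx], judge_scores[held_idx])
cal_mae = mean_absolute_error(human_ratings[held_idx],
                              iso.predict(judge_scores[held_idx]))
print(f"MAE raw={raw_mae:.3f}  calibrated={cal_mae:.3f}  "
      f"reduction={(raw_mae - cal_mae) / raw_mae:.1%}")
```

Isotonic regression is a natural fit here because it preserves the judge's ranking of samples while correcting any monotone scale mismatch against human ratings, which is exactly the cross-judge scale problem the abstract flags.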
Similar Papers
CultureVLM: Characterizing and Improving Cultural Understanding of Vision-Language Models for over 100 Countries
Artificial Intelligence
Teaches AI to understand cultures worldwide.
Evaluation of Cultural Competence of Vision-Language Models
CV and Pattern Recognition
Teaches computers to understand cultural meanings in pictures.
Toward Socially Aware Vision-Language Models: Evaluating Cultural Competence Through Multimodal Story Generation
Computation and Language
AI adapts its stories to match different cultures.