Cross-Cultural Expert-Level Art Critique Evaluation with Vision-Language Models

Published: January 12, 2026 | arXiv ID: 2601.07984v1

By: Haorui Yu, Ramon Ruiz-Dolz, Xuehang Wen, and more

Potential Business Impact:

Helps computers understand art from different cultures.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Models (VLMs) excel at visual perception, yet their ability to interpret cultural meaning in art remains under-validated. We present a tri-tier evaluation framework for cross-cultural art-critique assessment: Tier I computes automated coverage and risk indicators offline; Tier II applies rubric-based scoring using a single primary judge across five dimensions; and Tier III calibrates the Tier II aggregate score to human ratings via isotonic regression, yielding a 5.2% reduction in MAE on a 152-sample held-out set. The framework outputs a calibrated cultural-understanding score for model selection and cultural-gap diagnosis, together with dimension-level diagnostics and risk indicators. We evaluate 15 VLMs on 294 expert anchors spanning six cultural traditions. Key findings are that (i) automated metrics are unreliable proxies for cultural depth, (ii) Western samples score higher than non-Western samples under our sampling and rubric, and (iii) cross-judge scale mismatch makes naive score averaging unreliable, motivating a single primary judge with explicit calibration. Dataset and code are available in the supplementary materials.
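The Tier III step described above, calibrating a judge's aggregate score to human ratings with isotonic regression, can be sketched in a few lines. This is a minimal illustration, not the paper's code: the data here is synthetic (a hypothetical biased judge on a 152-sample calibration set, mirroring the held-out set size in the abstract), and `IsotonicRegression` from scikit-learn stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic stand-in data: judge scores on a 1-5 scale and noisy,
# systematically offset human ratings (hypothetical, for illustration).
rng = np.random.default_rng(0)
judge_scores = rng.uniform(1.0, 5.0, size=152)
human_ratings = np.clip(0.8 * judge_scores + rng.normal(0.0, 0.3, 152), 1.0, 5.0)

# Fit a monotone mapping from judge scores to human ratings on the
# calibration set; "clip" handles scores outside the fitted range.
calib = IsotonicRegression(out_of_bounds="clip")
calib.fit(judge_scores, human_ratings)

# Calibrated scores should track human ratings more closely (lower MAE).
calibrated = calib.predict(judge_scores)
mae_raw = np.mean(np.abs(judge_scores - human_ratings))
mae_cal = np.mean(np.abs(calibrated - human_ratings))
```

Because isotonic regression only learns a monotone transform, it corrects scale and offset mismatch between judge and humans without reordering the models being compared, which is what makes it a natural fit for this calibration step.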

Page Count
16 pages

Category
Computer Science:
Computation and Language