VULCA-Bench: A Multicultural Vision-Language Benchmark for Evaluating Cultural Understanding
By: Haorui Yu, Ramon Ruiz-Dolz, Diji Yang, et al.
We introduce VULCA-Bench, a multicultural art-critique benchmark for evaluating Vision-Language Models' (VLMs) cultural understanding beyond surface-level visual perception. Existing VLM benchmarks predominantly measure L1-L2 capabilities (object recognition, scene description, and factual question answering) while under-evaluating higher-order cultural interpretation. VULCA-Bench contains 7,410 matched image-critique pairs spanning eight cultural traditions, with Chinese-English bilingual coverage. We operationalise cultural understanding using a five-layer framework (L1-L5, from Visual Perception to Philosophical Aesthetics), instantiated as 225 culture-specific dimensions and supported by expert-written bilingual critiques. Our pilot results indicate that higher-layer reasoning (L3-L5) is consistently more challenging than visual and technical analysis (L1-L2). The dataset, evaluation scripts, and annotation tools are available under CC BY 4.0 in the supplementary materials.
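To make the described data layout concrete, the Python sketch below models one image-critique pair with per-layer dimension labels. This is illustrative only: the field and function names (CritiquePair, dimensions, layer_coverage) are hypothetical and not taken from the paper; the released dataset in the supplementary materials defines the actual schema.

```python
from dataclasses import dataclass, field

# The five layers of the framework, L1 (Visual Perception) through
# L5 (Philosophical Aesthetics), as named in the abstract.
LAYERS = ("L1", "L2", "L3", "L4", "L5")

@dataclass
class CritiquePair:
    """Hypothetical record for one of the 7,410 image-critique pairs."""
    image_path: str                  # artwork image
    tradition: str                   # one of the eight cultural traditions
    critique_zh: str                 # expert-written critique, Chinese
    critique_en: str                 # expert-written critique, English
    # Dimension labels drawn from the 225 culture-specific dimensions,
    # keyed by layer (L1-L5).
    dimensions: dict[str, list[str]] = field(default_factory=dict)

def layer_coverage(pairs: list[CritiquePair]) -> dict[str, int]:
    """Count how many pairs carry at least one dimension at each layer."""
    counts = {layer: 0 for layer in LAYERS}
    for pair in pairs:
        for layer in LAYERS:
            if pair.dimensions.get(layer):
                counts[layer] += 1
    return counts
```

A coverage summary of this kind would make the abstract's pilot finding inspectable: if L3-L5 dimensions are annotated as densely as L1-L2 yet scored lower by models, the gap reflects reasoning difficulty rather than sparse labels.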