Vision Language Models are Confused Tourists
By: Patrick Amadeus Irawan, Ikhlasul Akmal Hanif, Muhammad Dehan Al Kautsar, and more
Potential Business Impact:
Makes AI understand different cultures better.
Although the cultural dimension has been a key aspect in evaluating Vision-Language Models (VLMs), their ability to remain stable across diverse cultural inputs is largely untested, despite being crucial for supporting diverse, multicultural societies. Existing evaluations often rely on benchmarks featuring only a single cultural concept per image, overlooking scenarios where multiple, potentially unrelated cultural cues coexist. To address this gap, we introduce ConfusedTourist, a novel cultural adversarial robustness suite designed to assess VLMs' stability against perturbed geographical cues. Our experiments reveal a critical vulnerability: accuracy drops sharply under simple image-stacking perturbations and degrades even further with their image-generation-based variant. Interpretability analyses further show that these failures stem from systematic attention shifts toward distracting cues, diverting the model from its intended focus. These findings highlight a critical challenge: visual cultural concept mixing can substantially impair even state-of-the-art VLMs, underscoring the urgent need for more culturally robust multimodal understanding.
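The image-stacking perturbation described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual pipeline: the function names and the list-of-pixel-rows image representation are assumptions chosen to keep the example self-contained. The idea is simply to place a geographically unrelated distractor image next to the original, so the model receives mixed cultural cues in one input.

```python
def stack_horizontal(base, distractor):
    # Images are represented as lists of pixel rows (any pixel type).
    # Both images must share the same height for a side-by-side stack.
    if len(base) != len(distractor):
        raise ValueError("images must have equal height")
    return [b_row + d_row for b_row, d_row in zip(base, distractor)]

def stack_vertical(base, distractor):
    # Both images must share the same width for a top-to-bottom stack.
    if base and distractor and len(base[0]) != len(distractor[0]):
        raise ValueError("images must have equal width")
    return base + distractor

# Example: a 2x2 "landmark" image stacked with a 2x1 distractor column
# drawn from an unrelated culture or location.
landmark = [["L", "L"], ["L", "L"]]
distractor = [["D"], ["D"]]
perturbed = stack_horizontal(landmark, distractor)
# perturbed is [["L", "L", "D"], ["L", "L", "D"]]
```

In practice the same operation would be applied to real image arrays (e.g. with Pillow or NumPy), and the evaluation would compare the model's answer on the original image against its answer on the perturbed composite.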
Similar Papers
Uncovering Cultural Representation Disparities in Vision-Language Models
CV and Pattern Recognition
Finds AI's unfair views on different countries.
CultureVLM: Characterizing and Improving Cultural Understanding of Vision-Language Models for over 100 Countries
Artificial Intelligence
Teaches AI to understand cultures worldwide.
Cultural Awareness in Vision-Language Models: A Cross-Country Exploration
Computers and Society
Shows how computers see people and places unfairly.