Uncovering Cultural Representation Disparities in Vision-Language Models

Published: May 20, 2025 | arXiv ID: 2505.14729v3

By: Ram Mohan Rao Kadiyala, Siddhant Gupta, Jebish Purbey, and more

Potential Business Impact:

Reveals that vision-language AI models recognize imagery from some countries far more accurately than others, exposing cultural bias.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Models (VLMs) have demonstrated impressive capabilities across a range of tasks, yet concerns remain about their potential biases. This work investigates the extent to which prominent VLMs exhibit cultural biases by evaluating their performance on an image-based country identification task at the country level. Using the geographically diverse Country211 dataset, we probe several large VLMs under various prompting strategies: open-ended questions and multiple-choice questions (MCQs), including challenging setups such as multilingual and adversarial settings. Our analysis aims to uncover disparities in model accuracy across countries and question formats, providing insight into how training data distribution and evaluation methodology may influence cultural biases in VLMs. The findings highlight significant variations in performance, suggesting that while VLMs possess considerable visual understanding, they inherit biases from their pre-training data and scale that limit their ability to generalize uniformly across diverse global contexts.
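The country-level evaluation described above can be sketched as a simple per-country accuracy breakdown. This is a minimal illustration, not the paper's actual pipeline: `predict_country` is a hypothetical stub standing in for a real VLM query, and the sample data is invented (the real study uses the Country211 dataset).

```python
from collections import defaultdict

def predict_country(image_id: str) -> str:
    # Hypothetical stub in place of a real VLM call. As a toy example of
    # bias, this "model" always answers "US" regardless of the image.
    return "US"

def per_country_accuracy(samples):
    """samples: iterable of (image_id, true_country) pairs.
    Returns {country: accuracy}, surfacing the per-country disparities
    that a country-level evaluation is designed to expose."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image_id, country in samples:
        total[country] += 1
        if predict_country(image_id) == country:
            correct[country] += 1
    return {c: correct[c] / total[c] for c in total}

# Invented sample: the biased stub scores perfectly on US images
# and fails everywhere else.
data = [("img1", "US"), ("img2", "US"), ("img3", "JP"), ("img4", "BR")]
print(per_country_accuracy(data))  # {'US': 1.0, 'JP': 0.0, 'BR': 0.0}
```

Aggregating accuracy per country, rather than reporting a single overall number, is what makes the disparity visible: the stub above would score 50% overall while being completely wrong for half the countries.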

Page Count
28 pages

Category
Computer Science:
CV and Pattern Recognition