Uncovering Cultural Representation Disparities in Vision-Language Models
By: Ram Mohan Rao Kadiyala, Siddhant Gupta, Jebish Purbey, and more
Potential Business Impact:
Finds AI's unfair views on different countries.
Vision-Language Models (VLMs) have demonstrated impressive capabilities across a range of tasks, yet concerns persist about their potential biases. This work investigates the extent to which prominent VLMs exhibit cultural biases by evaluating their per-country performance on an image-based country identification task. Using the geographically diverse Country211 dataset, we probe several large VLMs under various prompting strategies: open-ended questions and multiple-choice questions (MCQs), including challenging setups such as multilingual and adversarial variants. Our analysis aims to uncover disparities in model accuracy across countries and question formats, offering insight into how training data distribution and evaluation methodology might influence cultural biases in VLMs. The findings reveal significant performance variations, suggesting that while VLMs possess considerable visual understanding, they inherit biases from the distribution and scale of their pre-training data that limit their ability to generalize uniformly across diverse global contexts.
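To make the evaluation protocol concrete, here is a minimal sketch under stated assumptions: iterate over Country211 images, query a model with either an open-ended or an MCQ prompt, and tally per-country accuracy. The `query_vlm` wrapper is hypothetical (a stand-in for whichever VLM API is under test), and the prompt wordings are illustrative, not the paper's.

```python
# Minimal sketch of the per-country probing setup described above.
# Assumptions (not from the paper): `query_vlm` is a hypothetical wrapper
# around whichever VLM is being tested; prompt wordings are illustrative.
# Country211 ships with torchvision and uses ISO 3166-1 alpha-2 labels.
import random
from collections import defaultdict

from torchvision.datasets import Country211

OPEN_ENDED = "In which country was this photo taken? Answer with the country name only."

def mcq_prompt(correct: str, classes: list[str], n_options: int = 4) -> tuple[str, list[str]]:
    """Build a multiple-choice variant: the true country plus random distractors."""
    options = random.sample([c for c in classes if c != correct], n_options - 1)
    options.append(correct)
    random.shuffle(options)
    body = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    return f"In which country was this photo taken?\n{body}\nAnswer with one letter.", options

def query_vlm(image, prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g. a hosted VLM endpoint)."""
    raise NotImplementedError

def evaluate(root: str = "data", split: str = "test") -> dict[str, float]:
    ds = Country211(root=root, split=split, download=True)
    hits, totals = defaultdict(int), defaultdict(int)
    for image, label in ds:
        country = ds.classes[label]  # ISO code; map to a full name before matching in practice
        answer = query_vlm(image, OPEN_ENDED)
        totals[country] += 1
        hits[country] += int(country.lower() in answer.lower())
    # Per-country accuracy, rather than a single aggregate score, is what
    # exposes the geographic disparities the paper studies.
    return {c: hits[c] / totals[c] for c in totals}
```

Swapping `OPEN_ENDED` for the output of `mcq_prompt` (and scoring the chosen letter instead) gives the MCQ condition; translating the prompt text would yield a multilingual variant along the lines the abstract describes.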
Similar Papers
Cultural Awareness in Vision-Language Models: A Cross-Country Exploration
Computers and Society
Shows how computers see people and places unfairly.
CultureVLM: Characterizing and Improving Cultural Understanding of Vision-Language Models for over 100 Countries
Artificial Intelligence
Teaches AI to understand cultures worldwide.
Bias in the Picture: Benchmarking VLMs with Social-Cue News Images and LLM-as-Judge Assessment
Computer Vision and Pattern Recognition
Finds and fixes unfairness in AI that sees and reads.