Bias in the Picture: Benchmarking VLMs with Social-Cue News Images and LLM-as-Judge Assessment
By: Aravind Narayanan, Vahid Reza Khazaie, Shaina Raza
Potential Business Impact:
Finds and fixes unfairness in AI that sees and reads.
Large vision-language models (VLMs) can jointly interpret images and text, but they are also prone to absorbing and reproducing harmful social stereotypes when visual cues such as age, gender, race, clothing, or occupation are present. To investigate these risks, we introduce a news-image benchmark consisting of 1,343 image-question pairs drawn from diverse outlets, which we annotated with ground-truth answers and demographic attributes (age, gender, race, occupation, and sports). We evaluate a range of state-of-the-art VLMs and employ a large language model (LLM) as judge, with human verification. Our findings show that: (i) visual context systematically shifts model outputs in open-ended settings; (ii) bias prevalence varies across attributes and models, with particularly high risk for gender and occupation; and (iii) higher faithfulness does not necessarily correspond to lower bias. We release the benchmark prompts, evaluation rubric, and code to support reproducible and fairness-aware multimodal assessment.
Similar Papers
Vision-Language Models display a strong gender bias
CV and Pattern Recognition
Finds unfair gender ideas in AI that sees and reads.
Zero-shot image privacy classification with Vision-Language Models
CV and Pattern Recognition
Makes computers better at guessing private pictures.
Visual Cues of Gender and Race are Associated with Stereotyping in Vision-Language Models
CV and Pattern Recognition
Makes AI tell more uniform stories about women.