AesBiasBench: Evaluating Bias and Alignment in Multimodal Language Models for Personalized Image Aesthetic Assessment
By: Kun Li, Lai-Man Po, Hongzheng Yang, and more
Potential Business Impact:
Tests whether AI judges pictures differently depending on who is said to be viewing them.
Multimodal Large Language Models (MLLMs) are increasingly applied in Personalized Image Aesthetic Assessment (PIAA) as a scalable alternative to expert evaluations. However, their predictions may reflect subtle biases influenced by demographic factors such as gender, age, and education. In this work, we propose AesBiasBench, a benchmark designed to evaluate MLLMs along two complementary dimensions: (1) stereotype bias, quantified by measuring variations in aesthetic evaluations across demographic groups; and (2) alignment between model outputs and genuine human aesthetic preferences. Our benchmark covers three subtasks (Aesthetic Perception, Assessment, Empathy) and introduces structured metrics (IFD, NRD, AAS) to assess both bias and alignment. We evaluate 19 MLLMs, including proprietary models (e.g., GPT-4o, Claude-3.5-Sonnet) and open-source models (e.g., InternVL-2.5, Qwen2.5-VL). Results indicate that smaller models exhibit stronger stereotype biases, whereas larger models align more closely with human preferences. Incorporating identity information often exacerbates bias, particularly in emotional judgments. These findings underscore the importance of identity-aware evaluation frameworks in subjective vision-language tasks.
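The abstract does not define the IFD, NRD, or AAS metrics, but the core idea of quantifying stereotype bias as "variation in aesthetic evaluations across demographic groups" can be sketched generically. Below is a minimal, hypothetical illustration (toy scores, not benchmark data): the same images are scored under different stated viewer identities, and the spread of per-group mean scores serves as a simple disparity measure.

```python
from statistics import mean

# Hypothetical toy data: a model's aesthetic scores for the SAME set of
# images, re-scored with a different stated viewer identity each time.
# These numbers are illustrative only, not drawn from AesBiasBench.
scores_by_group = {
    "group_a": [7.2, 6.8, 8.1, 5.9],
    "group_b": [6.9, 6.7, 7.8, 6.0],
    "group_c": [5.1, 4.9, 6.2, 4.4],
}

def group_disparity(scores_by_group):
    """Spread of per-group mean scores; 0 means identical treatment.

    This is a generic disparity measure, not the paper's IFD/NRD/AAS,
    whose exact definitions are not given in this summary.
    """
    means = [mean(v) for v in scores_by_group.values()]
    return max(means) - min(means)

print(round(group_disparity(scores_by_group), 2))  # larger value = stronger group-dependent bias
```

A metric like this is computed per demographic attribute (e.g., gender, age, education); a model whose scores barely move when the stated identity changes would come out near zero.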
Similar Papers
Automated Evaluation of Gender Bias Across 13 Large Multimodal Models
CV and Pattern Recognition
Finds that AI generates unfair pictures of people in different jobs.
Bias in the Picture: Benchmarking VLMs with Social-Cue News Images and LLM-as-Judge Assessment
CV and Pattern Recognition
Finds and fixes unfairness in AI that sees and reads.
Breaking the Benchmark: Revealing LLM Bias via Minimal Contextual Augmentation
Computation and Language
Reveals hidden unfairness in AI using small changes to prompts.