Hidden Bias in the Machine: Stereotypes in Text-to-Image Models
By: Sedat Porikli, Vedat Porikli
Potential Business Impact:
Shows how AI-generated images can reflect and reinforce unfair stereotypes.
Text-to-Image (T2I) models have transformed visual content creation, producing highly realistic images from natural language prompts. However, concerns persist around their potential to replicate and magnify existing societal biases. To investigate these issues, we curated a diverse set of prompts spanning thematic categories such as occupations, traits, actions, ideologies, emotions, family roles, place descriptions, spirituality, and life events. For each of the 160 unique topics, we crafted multiple prompt variations to reflect a wide range of meanings and perspectives. Using Stable Diffusion 1.5 (UNet-based) and Flux-1 (DiT-based) models with original checkpoints, we generated over 16,000 images under consistent settings. Additionally, we collected 8,000 comparison images from Google Image Search. All outputs were filtered to exclude abstract, distorted, or nonsensical results. Our analysis reveals significant disparities in the representation of gender, race, age, somatotype, and other human-centric factors across generated images. These disparities often mirror and reinforce harmful stereotypes embedded in societal narratives. We discuss the implications of these findings and emphasize the need for more inclusive datasets and development practices to foster fairness in generative visual systems.
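To make the generation setup concrete, a minimal sketch of batch image generation with the Hugging Face diffusers library is shown below. The topic list, prompt variations, and sampler settings are hypothetical stand-ins, not the authors' exact configuration; the Flux-1 runs would use a different pipeline class in the same way.

```python
# Minimal sketch, assuming the Hugging Face `diffusers` library.
# Model ID, topics, prompt variations, and settings are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion 1.5 from its original checkpoint (UNet-based model).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical topic -> prompt-variation mapping (the paper uses 160 topics,
# each with multiple phrasings).
prompts = {
    "nurse": ["a photo of a nurse at work", "a portrait of a nurse"],
    "engineer": ["a photo of an engineer at work", "a portrait of an engineer"],
}

images_per_prompt = 4                                 # consistent settings for all topics
generator = torch.Generator("cuda").manual_seed(0)    # fixed seed for reproducibility

for topic, variations in prompts.items():
    for v_idx, prompt in enumerate(variations):
        result = pipe(
            prompt,
            num_images_per_prompt=images_per_prompt,
            num_inference_steps=50,
            guidance_scale=7.5,
            generator=generator,
        )
        for img_idx, image in enumerate(result.images):
            image.save(f"{topic}_{v_idx}_{img_idx}.png")
```

Generated images would then be filtered for abstract, distorted, or nonsensical outputs before the demographic analysis described above.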
Similar Papers
Prompting Away Stereotypes? Evaluating Bias in Text-to-Image Models for Occupations
Computation and Language
Tests whether prompting can reduce occupational stereotypes in AI-generated images.
T2IBias: Uncovering Societal Bias Encoded in the Latent Space of Text-to-Image Generative Models
Machine Learning (CS)
Shows that societal biases are encoded in the latent space of text-to-image models.
Text-to-Image Models and Their Representation of People from Different Nationalities Engaging in Activities
CV and Pattern Recognition
Finds that AI images depict people from some nationalities in traditional attire rather than everyday life.