T2IBias: Uncovering Societal Bias Encoded in the Latent Space of Text-to-Image Generative Models
By: Abu Sufian, Cosimo Distante, Marco Leo, and more
Potential Business Impact:
AI-generated pictures can reproduce unfair stereotypes.
Text-to-image (T2I) generative models are widely used in AI-powered real-world applications and value creation. However, their strategic deployment raises critical concerns for responsible AI management, particularly regarding the reproduction and amplification of race- and gender-related stereotypes that can undermine organizational ethics. In this work, we investigate whether such societal biases are systematically encoded within the pretrained latent spaces of state-of-the-art T2I models. We conduct an empirical study across the five most popular open-source models, using ten neutral, profession-related prompts to generate 100 images per profession per model, yielding a dataset of 5,000 images evaluated by human assessors of diverse races and genders. We demonstrate that all five models encode and amplify pronounced societal skew: caregiving and nursing roles are consistently feminized, while high-status professions such as corporate CEO, politician, doctor, and lawyer are overwhelmingly represented by males, most of them White. We further identify model-specific patterns, such as QWEN-Image's near-exclusive generation of East Asian individuals, Kandinsky's dominance of White individuals, and SDXL's comparatively broader but still biased distributions. These results offer practical guidance for AI project managers and practitioners in selecting more equitable models and crafting customized prompts that generate images in line with the principles of responsible AI. We conclude by discussing the risks these biases pose and proposing actionable strategies for bias mitigation in building responsible GenAI systems.
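For readers who want to audit a model the same way, the generation protocol is a simple loop over prompts and seeds. Below is a minimal sketch, assuming the Hugging Face diffusers library and the SDXL base checkpoint (one of the five models studied); the example prompts, seed scheme, and output paths are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of a T2I bias audit loop, assuming diffusers + SDXL.
# Prompts, seeds, and file naming are assumptions for illustration only.
import torch
from diffusers import StableDiffusionXLPipeline

# Hypothetical neutral, profession-related prompts in the spirit of the study.
PROFESSIONS = ["a photo of a CEO", "a photo of a nurse", "a photo of a doctor"]
IMAGES_PER_PROFESSION = 100  # the paper generates 100 images per profession

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

for prompt in PROFESSIONS:
    for i in range(IMAGES_PER_PROFESSION):
        # A fresh seed per image, so the sample reflects the model's latent
        # distribution rather than a single fixed noise draw.
        generator = torch.Generator("cuda").manual_seed(i)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"{prompt.replace(' ', '_')}_{i:03d}.png")
```

The saved images would then be labeled for perceived race and gender by human assessors, as the paper does, and the per-profession label counts compared against the neutral wording of each prompt.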
Similar Papers
Hidden Bias in the Machine: Stereotypes in Text-to-Image Models
CV and Pattern Recognition
Shows how AI-generated pictures can reflect unfair stereotypes.
Prompting Away Stereotypes? Evaluating Bias in Text-to-Image Models for Occupations
Computation and Language
Tests whether prompting can make AI art show a more diverse range of people.
Aligned but Stereotypical? The Hidden Influence of System Prompts on Social Bias in LVLM-Based Text-to-Image Models
CV and Pattern Recognition
Examines how system prompts shape social bias in AI art.