Prompting Away Stereotypes? Evaluating Bias in Text-to-Image Models for Occupations
By: Shaina Raza, Maximus Powers, Partha Pratim Saha, and more
Potential Business Impact:
Helps AI art tools show a more diverse range of people.
Text-to-Image (TTI) models are powerful creative tools but risk amplifying harmful social biases. We frame representational societal bias assessment as an image curation and evaluation task and introduce a pilot benchmark of occupational portrayals spanning five socially salient roles (CEO, Nurse, Software Engineer, Teacher, Athlete). Using five state-of-the-art models, two closed-source (DALLE 3, Gemini Imagen 4.0) and three open-source (FLUX.1-dev, Stable Diffusion XL Turbo, Grok-2 Image), we compare neutral baseline prompts against fairness-aware controlled prompts designed to encourage demographic diversity. All outputs are annotated for gender (male, female) and race (Asian, Black, White), enabling structured distributional analysis. Results show that prompting can substantially shift demographic representations, but with highly model-specific effects: some systems diversify effectively, others overcorrect into unrealistic uniformity, and some show little responsiveness. These findings highlight both the promise and the limitations of prompting as a fairness intervention, underscoring the need for complementary model-level strategies. We release all code and data for transparency and reproducibility at https://github.com/maximus-powers/img-gen-bias-analysis.
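The abstract describes a structured distributional analysis of demographically annotated images under baseline versus fairness-aware prompts. The snippet below is a minimal sketch of what such a comparison could look like, assuming hypothetical annotation records and total variation distance from a uniform gender target as an illustrative diversity measure; the record layout, labels, and function names are assumptions for illustration, not the authors' released code or chosen metric.

```python
from collections import Counter

def demographic_distribution(annotations, attribute):
    """Normalize annotation counts for one attribute (e.g., 'gender' or 'race')."""
    counts = Counter(record[attribute] for record in annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two categorical distributions."""
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in labels)

# Hypothetical annotated outputs for one occupation under two prompt conditions.
baseline = [{"gender": "male", "race": "White"}] * 8 + [{"gender": "female", "race": "Asian"}] * 2
controlled = (
    [{"gender": "male", "race": "White"}] * 4
    + [{"gender": "female", "race": "Black"}] * 3
    + [{"gender": "female", "race": "Asian"}] * 3
)

# Illustrative uniform target over the annotated gender labels.
uniform_gender = {"male": 0.5, "female": 0.5}

for name, batch in [("baseline", baseline), ("controlled", controlled)]:
    dist = demographic_distribution(batch, "gender")
    print(name, dist, "TV from uniform:", round(total_variation(dist, uniform_gender), 3))
```

Run per occupation and per model, a comparison like this would show whether a fairness-aware prompt moves the output distribution toward the target, overshoots it, or leaves it largely unchanged, which is the kind of model-specific behavior the abstract reports.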
Similar Papers
Hidden Bias in the Machine: Stereotypes in Text-to-Image Models
CV and Pattern Recognition
Shows how AI-generated pictures can carry unfair ideas.
Can we Debias Social Stereotypes in AI-Generated Images? Examining Text-to-Image Outputs and User Perceptions
Human-Computer Interaction
Fixes AI art to show less unfairness.
Exposing Hidden Biases in Text-to-Image Models via Automated Prompt Search
Machine Learning (CS)
Finds hidden unfairness in AI art.