Aligned but Stereotypical? The Hidden Influence of System Prompts on Social Bias in LVLM-Based Text-to-Image Models
By: NaHyeon Park, Namin An, Kunhee Kim, and more
Potential Business Impact:
Makes AI-generated images less socially biased.
Large vision-language model (LVLM) based text-to-image (T2I) systems have become the dominant paradigm in image generation, yet whether they amplify social biases remains insufficiently understood. In this paper, we show that LVLM-based models produce markedly more socially biased images than non-LVLM-based models. We introduce a 1,024-prompt benchmark spanning four levels of linguistic complexity and systematically evaluate demographic bias across multiple attributes. Our analysis identifies system prompts, the predefined instructions guiding LVLMs, as a primary driver of biased behavior. Through decoded intermediate representations, token-probability diagnostics, and embedding-association analyses, we reveal how system prompts encode demographic priors that propagate into image synthesis. Building on these findings, we propose FairPro, a training-free meta-prompting framework that enables LVLMs to self-audit and construct fairness-aware system prompts at test time. Experiments on two LVLM-based T2I models, SANA and Qwen-Image, show that FairPro substantially reduces demographic bias while preserving text-image alignment. We believe our findings provide deeper insight into the central role of system prompts in bias propagation and offer a practical, deployable approach for building more socially responsible T2I systems.
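The abstract describes FairPro only at a high level: a training-free, test-time loop in which the LVLM audits its own system prompt for demographic priors and rewrites it into a fairness-aware version before image generation. The sketch below illustrates one way such a self-audit-and-rewrite loop could look; `query_lvlm`, `generate_image`, and the instruction wording are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of a test-time meta-prompting loop in the spirit of FairPro,
# as summarized in the abstract. The concrete FairPro procedure and prompt
# wording are not given there; `query_lvlm` and `generate_image` are assumed
# wrappers around the deployed LVLM and the T2I backbone (e.g. SANA, Qwen-Image).

from typing import Callable

AUDIT_INSTRUCTION = (
    "Review the system prompt below. List any phrases that could push image "
    "generation toward a particular gender, age, skin tone, or other "
    "demographic group when the user's prompt does not specify one."
)

REWRITE_INSTRUCTION = (
    "Rewrite the system prompt so that, when demographics are unspecified, "
    "generated people carry no default demographic, while keeping all style "
    "and quality instructions intact."
)


def fairness_aware_system_prompt(
    system_prompt: str,
    query_lvlm: Callable[[str, str], str],
) -> str:
    """Ask the LVLM to self-audit its system prompt, then rewrite it.

    `query_lvlm(system, user)` is an assumed call returning the LVLM's text
    response; no training is involved, everything happens at test time.
    """
    audit_report = query_lvlm(system_prompt, AUDIT_INSTRUCTION)
    revised = query_lvlm(
        system_prompt,
        f"{REWRITE_INSTRUCTION}\n\nAudit findings:\n{audit_report}",
    )
    # Fall back to the original prompt if the rewrite comes back empty.
    return revised.strip() or system_prompt


def generate_with_fair_prompt(
    user_prompt: str,
    default_system_prompt: str,
    query_lvlm: Callable[[str, str], str],
    generate_image: Callable[[str, str], object],
):
    """Run T2I generation with a fairness-audited system prompt."""
    fair_system = fairness_aware_system_prompt(default_system_prompt, query_lvlm)
    return generate_image(fair_system, user_prompt)
```

Because the rewrite is applied only to the system prompt, the user's prompt and the generation pipeline stay untouched, which is consistent with the abstract's claim that text-image alignment is preserved.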
Similar Papers
Using LLMs as prompt modifier to avoid biases in AI image generators
Computation and Language
Makes AI art show more kinds of people.
T2IBias: Uncovering Societal Bias Encoded in the Latent Space of Text-to-Image Generative Models
Machine Learning (CS)
AI makes pictures show unfair stereotypes.
Prompting Away Stereotypes? Evaluating Bias in Text-to-Image Models for Occupations
Computation and Language
Makes AI art show different kinds of people.