AutoDebias: Automated Framework for Debiasing Text-to-Image Models
By: Hongyi Cai, Mohammad Mahdinur Rahman, Mingkang Dong, and more
Potential Business Impact:
Makes AI art show everyone fairly.
Text-to-Image (T2I) models generate high-quality images from text prompts but often exhibit unintended social biases, such as gender or racial stereotypes, even when these attributes are not mentioned. Existing debiasing methods work well for simple or well-known cases but struggle with subtle or overlapping biases. We propose AutoDebias, a framework that automatically identifies and mitigates harmful biases in T2I models without prior knowledge of specific bias types. Specifically, AutoDebias leverages vision-language models to detect biased visual patterns and constructs fairness guides by generating inclusive alternative prompts that reflect balanced representations. These guides drive a CLIP-guided training process that promotes fairer outputs while preserving the original model's image quality and diversity. Unlike existing methods, AutoDebias effectively addresses both subtle stereotypes and multiple interacting biases. We evaluate the framework on a benchmark covering over 25 bias scenarios, including challenging cases where multiple biases occur simultaneously. AutoDebias detects harmful patterns with 91.6% accuracy and reduces biased outputs from 90% to negligible levels, while preserving the visual fidelity of the original model.
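The abstract describes a CLIP-guided training step driven by "fairness guides" built from inclusive alternative prompts. As a rough illustration of how such guidance might be scored, here is a minimal sketch, not the paper's implementation: it uses Hugging Face's CLIP to measure how a batch of generated images distributes over hypothetical demographic prompt variants and penalizes deviation from a uniform mix. The KL-to-uniform loss form, the `fairness_loss` helper, and the example prompts are all assumptions, since the paper's exact objective is not given here.

```python
# Minimal sketch of a CLIP-guided fairness signal, assuming a
# KL-to-uniform objective over demographic prompt variants.
# Illustration only; not the AutoDebias implementation.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def fairness_loss(images, variant_prompts):
    """Score how unevenly a batch of generated images distributes over
    the variant prompts: KL(uniform || empirical variant distribution)."""
    inputs = processor(text=variant_prompts, images=images,
                       return_tensors="pt", padding=True)
    # Scoring sketch only: gradients back into the T2I generator are omitted.
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: (num_images, num_variants) image-text similarities
    probs = outputs.logits_per_image.softmax(dim=-1)
    batch_dist = probs.mean(dim=0)  # empirical mix over the variants
    uniform = torch.full_like(batch_dist, 1.0 / len(variant_prompts))
    return F.kl_div(batch_dist.log(), uniform, reduction="sum")

# Hypothetical variants for an occupation prompt with a known gender skew.
variants = ["a photo of a male doctor", "a photo of a female doctor"]
```

In an actual fine-tuning loop, a term like this would be combined with a fidelity objective (e.g., staying close to the original model's outputs) so that debiasing does not degrade image quality or diversity, which is the trade-off the abstract emphasizes.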
Similar Papers
Fully Unsupervised Self-debiasing of Text-to-Image Diffusion Models
CV and Pattern Recognition
Makes AI art fairer and less biased.
Hidden Bias in the Machine: Stereotypes in Text-to-Image Models
CV and Pattern Recognition
Shows how AI pictures can reflect unfair ideas.
FairImagen: Post-Processing for Bias Mitigation in Text-to-Image Models
Machine Learning (CS)
Makes AI art fair for everyone.