How I Met Your Bias: Investigating Bias Amplification in Diffusion Models
By: Nathan Roos, Ekaterina Iakovleva, Ani Gjergji, et al.
Diffusion-based generative models achieve state-of-the-art performance across various image synthesis tasks, yet their tendency to replicate and amplify dataset biases remains poorly understood. Although previous research has treated bias amplification as an inherent property of diffusion models, this work provides the first analysis of how sampling algorithms and their hyperparameters influence bias amplification. We empirically demonstrate that samplers for diffusion models, which are commonly optimized for sample quality and speed, have a significant and measurable effect on bias amplification. Through controlled studies with models trained on Biased MNIST, Multi-Color MNIST, and BFFHQ, and with Stable Diffusion, we show that sampling hyperparameters can induce both bias reduction and bias amplification, even when the trained model is fixed. Source code is available at https://github.com/How-I-met-your-bias/how_i_met_your_bias.
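The claim that a fixed model can either amplify or reduce a dataset bias depending on sampler settings can be illustrated with a simple frequency-based metric. The sketch below is not the paper's exact metric: it assumes each sample has already been assigned a bias attribute (e.g., by a classifier), and measures amplification as the difference between the attribute's frequency in generated samples and in the training set.

```python
from collections import Counter


def attribute_proportions(labels):
    """Fraction of samples carrying each attribute value."""
    counts = Counter(labels)
    total = len(labels)
    return {k: v / total for k, v in counts.items()}


def bias_amplification(train_labels, generated_labels):
    """Per-attribute difference p_generated - p_train.

    Positive values mean the sampler amplified the training bias
    toward that attribute; negative values mean it reduced it.
    """
    p_train = attribute_proportions(train_labels)
    p_gen = attribute_proportions(generated_labels)
    return {k: p_gen.get(k, 0.0) - p for k, p in p_train.items()}


# Toy example: a color attribute that is 80/20 in the training data.
train = ["red"] * 80 + ["blue"] * 20
# Hypothetical samples from the SAME model under two sampler settings.
gen_a = ["red"] * 92 + ["blue"] * 8   # this setting amplifies the bias
gen_b = ["red"] * 74 + ["blue"] * 26  # this setting reduces it

print(bias_amplification(train, gen_a)["red"])  # positive: amplification
print(bias_amplification(train, gen_b)["red"])  # negative: reduction
```

The two generated label lists stand in for outputs produced with different sampler hyperparameters (e.g., number of steps or guidance scale); the same model yields opposite-signed amplification scores, which is the effect the paper studies.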