On the MIA Vulnerability Gap Between Private GANs and Diffusion Models
By: Ilana Sebag, Jean-Yves Franceschi, Alain Rakotomamonjy, and more
Potential Business Impact:
Shows which image-generating AI models better resist attacks that try to reveal their training data.
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. While both can be trained under differential privacy (DP) to protect sensitive data, their sensitivity to membership inference attacks (MIAs), a key threat to data confidentiality, remains poorly understood. In this work, we present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models. We begin by showing, through a stability-based analysis, that GANs exhibit fundamentally lower sensitivity to data perturbations than diffusion models, suggesting a structural advantage in resisting MIAs. We then validate this insight with a comprehensive empirical study using a standardized MIA pipeline to evaluate privacy leakage across datasets and privacy budgets. Our results consistently reveal a marked privacy robustness gap in favor of GANs, even in strong DP regimes, highlighting that model type alone can critically shape privacy leakage.
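The abstract refers to a standardized MIA pipeline but does not spell out the attack here. As a rough illustration of what such a pipeline measures, the sketch below implements a common loss-threshold membership inference attack: the attacker guesses "member" when a sample's per-example loss is low, and the attack AUC summarizes leakage (0.5 means no leakage, 1.0 means perfect membership recovery). The function name and the synthetic losses are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def loss_threshold_mia_auc(member_losses, nonmember_losses):
    """AUC of a simple loss-threshold membership inference attack.

    Lower loss is treated as evidence of membership; the AUC aggregates
    the attack's success over all possible thresholds.
    """
    # Negate losses so that higher score = more likely a training member.
    scores = np.concatenate([-member_losses, -nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])

    # Rank-based AUC (Mann-Whitney U statistic).
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical per-sample losses: members tend to score slightly lower,
# mimicking the memorization that an MIA exploits.
rng = np.random.default_rng(0)
member_losses = rng.normal(loc=0.9, scale=0.2, size=1000)
nonmember_losses = rng.normal(loc=1.0, scale=0.2, size=1000)
print(f"MIA AUC: {loss_threshold_mia_auc(member_losses, nonmember_losses):.3f}")
```

In this framing, the paper's reported robustness gap would show up as a lower attack AUC for DP-trained GANs than for DP-trained diffusion models at the same privacy budget.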
Similar Papers
The Hidden Cost of Modeling P(X): Vulnerability to Membership Inference Attacks in Generative Text Classifiers
Cryptography and Security
Protects private data used to train AI.
Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble
Machine Learning (CS)
Finds hidden privacy leaks in smart computer programs.
Exposing and Defending Membership Leakage in Vulnerability Prediction Models
Cryptography and Security
Protects code-writing AI from spying on its training data.