Enhanced Privacy Leakage from Noise-Perturbed Gradients via Gradient-Guided Conditional Diffusion Models
By: Jiayang Meng, Tao Huang, Hong Chen, and more
Potential Business Impact:
Reconstructs private training images from gradients shared during federated learning, even when those gradients are protected with added noise.
Federated learning synchronizes models through gradient transmission and aggregation. However, these gradients pose significant privacy risks, as sensitive training data is embedded within them. Existing gradient inversion attacks suffer from significantly degraded reconstruction performance when gradients are perturbed by noise, a common defense mechanism. In this paper, we introduce Gradient-Guided Conditional Diffusion Models (GG-CDMs) for reconstructing private images from leaked gradients without prior knowledge of the target data distribution. Our approach leverages the inherent denoising capability of diffusion models to circumvent the partial protection offered by noise perturbation, thereby improving attack performance under such defenses. We further provide a theoretical analysis of the reconstruction error bounds and the convergence properties of the attack loss, characterizing the impact of key factors, such as noise magnitude and attacked model architecture, on reconstruction quality. Extensive experiments demonstrate our attack's superior reconstruction performance with Gaussian noise-perturbed gradients, and confirm our theoretical findings.
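To make the core idea concrete, below is a minimal sketch of gradient-matching guidance inside a diffusion reverse step, written against PyTorch. It is not the paper's implementation: the denoiser `eps_model`, the attacked model `victim`, the leaked noise-perturbed gradients `leaked_grads`, the label `y`, the schedule `alphas_cumprod`, and `guidance_scale` are all assumed, illustrative names.

```python
# Sketch only: assumes a pretrained noise predictor `eps_model(x_t, t)`, an attacked
# classifier `victim`, leaked (noise-perturbed) gradients `leaked_grads` ordered like
# victim.parameters(), a label `y`, and a cumulative-alpha schedule `alphas_cumprod`.
import torch
import torch.nn.functional as F

def grad_match_loss(victim, x0_hat, y, leaked_grads):
    """L2 distance between the gradient induced by the candidate image
    and the leaked noise-perturbed gradient."""
    loss = F.cross_entropy(victim(x0_hat), y)
    grads = torch.autograd.grad(loss, victim.parameters(), create_graph=True)
    return sum(((g - lg) ** 2).sum() for g, lg in zip(grads, leaked_grads))

@torch.enable_grad()
def guided_reverse_step(eps_model, victim, x_t, t, y, leaked_grads,
                        alphas_cumprod, guidance_scale=1.0):
    """One DDIM-style reverse step nudged by the gradient-matching loss."""
    x_t = x_t.detach().requires_grad_(True)
    a_t = alphas_cumprod[t]
    eps = eps_model(x_t, t)                                   # predicted noise
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # estimate of the clean image
    guidance = torch.autograd.grad(
        grad_match_loss(victim, x0_hat, y, leaked_grads), x_t)[0]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    # Deterministic DDIM update, shifted against the gradient-matching direction.
    mean = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
    return mean - guidance_scale * guidance
```

In such a scheme the guidance term would be applied at every reverse step of the sampler, and the guidance scale would presumably need tuning against the magnitude of the noise added by the defense.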
Similar Papers
Towards Understanding Generalization in DP-GD: A Case Study in Training Two-Layer CNNs
Machine Learning (Stat)
Keeps private data safe while computers learn.
GUIDE: Enhancing Gradient Inversion Attacks in Federated Learning with Denoising Models
Cryptography and Security
Lets attackers reconstruct private training images from shared gradients.