Differential Privacy: Gradient Leakage Attacks in Federated Learning Environments
By: Miguel Fernandez-de-Retana, Unai Zulaika, Rubén Sánchez-Corcuera, and more
Potential Business Impact:
Protects private data when computers learn together.
Federated Learning (FL) allows Machine Learning models to be trained collaboratively without sharing sensitive data. However, it remains vulnerable to Gradient Leakage Attacks (GLAs), which can reveal private information from the shared model updates. In this work, we investigate the effectiveness of Differential Privacy (DP) mechanisms, specifically DP-SGD and a variant based on explicit regularization (PDP-SGD), as defenses against GLAs. To this end, we evaluate the performance of several computer vision models trained under varying privacy levels on a simple classification task, and then analyze the quality of private data reconstructions obtained from the intercepted gradients in a simulated FL environment. Our results demonstrate that DP-SGD significantly mitigates the risk of gradient leakage attacks, albeit with a moderate trade-off in model utility. In contrast, PDP-SGD maintains strong classification performance but proves ineffective as a practical defense against reconstruction attacks. These findings highlight the importance of empirically evaluating privacy mechanisms beyond their theoretical guarantees, particularly in distributed learning scenarios where information leakage may pose a critical threat to data security and privacy.
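To make the core defense concrete, the sketch below illustrates the standard DP-SGD recipe the abstract refers to: clip each per-example gradient and add Gaussian noise before the update. It is a minimal NumPy toy on logistic regression, not the paper's code or experimental setup; the data, the clip_norm and noise_multiplier values, and the helper names are hypothetical choices for illustration only.

```python
# Minimal DP-SGD sketch (NumPy, toy logistic regression).
# Illustrative only: the synthetic data, clip_norm, and noise_multiplier
# are hypothetical choices, not the paper's experimental configuration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (stand-in for the vision task).
n, d = 256, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grads(w, Xb, yb):
    # Gradient of the logistic loss for each example in the batch.
    p = sigmoid(Xb @ w)              # shape (batch,)
    return (p - yb)[:, None] * Xb    # shape (batch, d)

def dp_sgd_step(w, Xb, yb, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    g = per_example_grads(w, Xb, yb)
    # 1) Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # 2) Sum, add Gaussian noise calibrated to the clipping bound, then average.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
    g_priv = (g.sum(axis=0) + noise) / len(yb)
    return w - lr * g_priv

w = np.zeros(d)
for step in range(200):
    idx = rng.choice(n, size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"train accuracy under (illustrative) DP-SGD: {acc:.2f}")
```

The clipping bounds each example's influence on the shared update and the noise masks what remains, which is why the noised gradients give a reconstruction attack far less signal to invert; the cost is the utility trade-off the abstract reports.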
Similar Papers
Local Layer-wise Differential Privacy in Federated Learning
Cryptography and Security
Keeps AI learning private, better than before.
Optimal Strategies for Federated Learning Maintaining Client Privacy
Machine Learning (CS)
Makes private AI learning better without losing data.
An Interactive Framework for Implementing Privacy-Preserving Federated Learning: Experiments on Large Language Models
Machine Learning (CS)
Protects private data while training smart computer programs.