
Differential Privacy: Gradient Leakage Attacks in Federated Learning Environments

Published: October 27, 2025 | arXiv ID: 2510.23931v1

By: Miguel Fernandez-de-Retana, Unai Zulaika, Rubén Sánchez-Corcuera, and more

Potential Business Impact:

Protects private data when models are trained collaboratively across devices.

Business Areas:
Cloud Security, Information Technology, Privacy and Security

Federated Learning (FL) allows Machine Learning models to be trained collaboratively without sharing sensitive data. However, it remains vulnerable to Gradient Leakage Attacks (GLAs), which can reveal private information from the shared model updates. In this work, we investigate the effectiveness of Differential Privacy (DP) mechanisms - specifically, DP-SGD and a variant based on explicit regularization (PDP-SGD) - as defenses against GLAs. To this end, we evaluate the performance of several computer vision models trained under varying privacy levels on a simple classification task, and then analyze the quality of private data reconstructions obtained from the intercepted gradients in a simulated FL environment. Our results demonstrate that DP-SGD significantly mitigates the risk of gradient leakage attacks, albeit with a moderate trade-off in model utility. In contrast, PDP-SGD maintains strong classification performance but proves ineffective as a practical defense against reconstruction attacks. These findings highlight the importance of empirically evaluating privacy mechanisms beyond their theoretical guarantees, particularly in distributed learning scenarios where information leakage may represent an unacceptable threat to data security and privacy.
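
To illustrate the defense the abstract refers to, below is a minimal sketch of a DP-SGD-style update on a toy logistic-regression problem. It is not the paper's code: the data, model, and hyperparameters (clip_norm, noise_mult, lr) are illustrative assumptions. It shows the two ingredients that blunt gradient leakage, per-example gradient clipping and Gaussian noise, so that only a noised aggregate gradient is ever shared with the server (or an eavesdropper).

```python
# Minimal sketch (toy example, not the paper's implementation) of a DP-SGD update.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 32 examples, 10 features, binary labels.
X = rng.normal(size=(32, 10))
y = rng.integers(0, 2, size=32).astype(float)
w = np.zeros(10)

clip_norm = 1.0    # C: per-example L2 clipping bound (illustrative)
noise_mult = 1.1   # sigma: noise multiplier relative to C (illustrative)
lr = 0.1           # learning rate

def per_example_grads(w, X, y):
    """Logistic-loss gradient for each example separately (shape: n x d)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    return (p - y)[:, None] * X        # d loss_i / d w

for step in range(100):
    g = per_example_grads(w, X, y)

    # 1) Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g / np.maximum(1.0, norms / clip_norm)

    # 2) Sum, add Gaussian noise calibrated to the clipping bound, then average.
    noise = rng.normal(scale=noise_mult * clip_norm, size=w.shape)
    g_private = (g_clipped.sum(axis=0) + noise) / len(X)

    # Only g_private would be transmitted in an FL round, so an intercepted
    # update carries clipped, noised information about any single example.
    w -= lr * g_private
```

In an actual FL deployment the same clip-and-noise step would be applied to each client's model update before transmission; a gradient-matching reconstruction attack then has to work from the noised aggregate rather than the raw per-example gradients, which is what degrades the reconstructions reported in the paper.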

Country of Origin
🇪🇸 Spain

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)