On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy
By: Zijian Wang, Wei Tong, Tingxuan Han, and others
Potential Business Impact:
Shows how attackers can sabotage privacy-preserving AI training, highlighting the need for stronger defenses.
Federated learning (FL) combined with local differential privacy (LDP), hereafter LDPFL, enables privacy-preserving model training across decentralized data sources. However, the decentralized data-management paradigm leaves LDPFL vulnerable to participants with malicious intent. The robustness of LDPFL protocols, particularly against model poisoning attacks (MPA), where adversaries inject malicious updates to disrupt global model convergence, remains insufficiently studied. In this paper, we propose a novel and extensible model poisoning attack framework tailored for LDPFL settings. Our approach is driven by the objective of maximizing the global training loss while adhering to local privacy constraints. To counter robust aggregation mechanisms such as Multi-Krum and trimmed mean, we develop adaptive attacks that embed carefully crafted constraints into a reverse training process, enabling evasion of these defenses. We evaluate our framework across three representative LDPFL protocols, three benchmark datasets, and two types of deep neural networks. Additionally, we investigate the influence of data heterogeneity and privacy budgets on attack effectiveness. Experimental results demonstrate that our adaptive attacks can significantly degrade the performance of the global model, revealing critical vulnerabilities and highlighting the need for more robust LDPFL defense strategies against MPA. Our code is available at https://github.com/ZiJW/LDPFL-Attack.
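The abstract mentions trimmed mean as one of the robust aggregation defenses the adaptive attacks are designed to evade. As background, the sketch below shows the standard coordinate-wise trimmed-mean aggregator: for each model coordinate, the server sorts the client values, discards the extremes, and averages the rest. This is a minimal illustrative implementation of the general technique, not the paper's code; the function name, `trim_k` parameter, and example values are our own.

```python
# Coordinate-wise trimmed mean: a common robust FL aggregator that the
# paper's adaptive attacks must circumvent. Illustrative sketch only.

def trimmed_mean(updates, trim_k):
    """Aggregate client updates (equal-length lists of floats) by
    dropping the trim_k smallest and trim_k largest values in each
    coordinate, then averaging the remaining values."""
    n = len(updates)
    assert n > 2 * trim_k, "need more clients than trimmed entries"
    dim = len(updates[0])
    aggregated = []
    for j in range(dim):
        col = sorted(u[j] for u in updates)   # sort this coordinate
        kept = col[trim_k:n - trim_k]         # discard both extremes
        aggregated.append(sum(kept) / len(kept))
    return aggregated

# Example: a single poisoned outlier in coordinate 0 is discarded,
# so naive large-magnitude poisoning has little effect.
clients = [
    [0.10, 0.2],
    [0.12, 0.1],
    [0.11, 0.3],
    [9.00, 0.2],   # poisoned update with an extreme first coordinate
    [0.09, 0.2],
]
print(trimmed_mean(clients, trim_k=1))  # ≈ [0.11, 0.2]
```

Because the aggregator silently clips extreme values, an effective attack under this defense must keep its malicious updates inside the retained quantile range, which is exactly the kind of constraint the paper embeds into its reverse training process.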
Similar Papers
Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy
Cryptography and Security
Keeps private data safe while training AI.
Strategic Incentivization for Locally Differentially Private Federated Learning
Machine Learning (CS)
Helps protect privacy without hurting model accuracy.
Mitigating Data Poisoning Attacks to Local Differential Privacy
Cryptography and Security
Defends locally private data collection against attackers who inject fake reports.