Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning
By: Zhiyong Jin, Runhua Xu, Chao Li, and more
Potential Business Impact:
Protects smart learning from bad data.
Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy, yet it faces significant challenges in communication efficiency and vulnerability to poisoning attacks. While sparsification techniques mitigate communication overhead by transmitting only critical model parameters, they inadvertently amplify security risks: adversarial clients can exploit sparse updates to evade detection and degrade model performance. Existing defense mechanisms, designed for standard FL communication scenarios, are ineffective in addressing these vulnerabilities within sparsified FL. To bridge this gap, we propose FLARE, a novel federated learning framework that integrates sparse index mask inspection and model update sign similarity analysis to detect and mitigate poisoning attacks in sparsified FL. Extensive experiments across multiple datasets and adversarial scenarios demonstrate that FLARE significantly outperforms existing defense strategies, effectively securing sparsified FL against poisoning attacks while maintaining communication efficiency.
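The abstract names two defense signals: inspecting the sparse index mask each client transmits, and comparing the signs of clients' model updates. The sketch below illustrates, under stated assumptions, how such checks could be combined with top-k sparsification; the function names, the Jaccard overlap metric, and the thresholds are illustrative placeholders and not the actual FLARE procedure from the paper.

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep the k largest-magnitude entries of a flat update; return values and index mask."""
    idx = np.argsort(np.abs(update))[-k:]
    mask = np.zeros(update.shape[0], dtype=bool)
    mask[idx] = True
    return update * mask, mask

def mask_overlap(mask_a, mask_b):
    """Jaccard overlap between two clients' sparse index masks (illustrative metric)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def sign_similarity(update_a, update_b, shared_mask):
    """Fraction of shared nonzero coordinates where two sparse updates agree in sign."""
    shared = shared_mask & (update_a != 0) & (update_b != 0)
    if shared.sum() == 0:
        return 0.0
    return float(np.mean(np.sign(update_a[shared]) == np.sign(update_b[shared])))

def filter_clients(updates, masks, overlap_thresh=0.2, sign_thresh=0.5):
    """Flag clients whose masks or update signs deviate strongly from their peers.

    Thresholds are hypothetical, chosen only for this toy example.
    """
    n = len(updates)
    suspects = []
    for i in range(n):
        overlaps = [mask_overlap(masks[i], masks[j]) for j in range(n) if j != i]
        signs = [sign_similarity(updates[i], updates[j], masks[i] & masks[j])
                 for j in range(n) if j != i]
        if np.mean(overlaps) < overlap_thresh or np.mean(signs) < sign_thresh:
            suspects.append(i)
    return suspects

# Toy usage: four benign clients plus one sign-flipping attacker.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=1000)
raw_updates = [true_grad + 0.1 * rng.normal(size=1000) for _ in range(4)]
raw_updates.append(-true_grad)  # poisoned update with inverted signs
sparse = [topk_sparsify(u, k=100) for u in raw_updates]
updates, masks = zip(*sparse)
print("Suspected clients:", filter_clients(list(updates), list(masks)))
```

In this toy setup the attacker's index mask looks ordinary (it selects the same high-magnitude coordinates), so the mask check alone would miss it; the sign-agreement check is what flags it, which is why a combined inspection is plausible for sparsified updates.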
Similar Papers
Harnessing Sparsification in Federated Learning: A Secure, Efficient, and Differentially Private Realization
Cryptography and Security
Makes AI learn faster and safer from private data.
SparsyFed: Sparse Adaptive Federated Training
Machine Learning (CS)
Trains AI on phones faster and with less data.
Stealth by Conformity: Evading Robust Aggregation through Adaptive Poisoning
Cryptography and Security
Tricks AI into learning wrong things secretly.