Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning

Published: April 30, 2025 | arXiv ID: 2505.01454v4

By: Zhiyong Jin, Runhua Xu, Chao Li, et al.

Potential Business Impact:

Protects collaborative machine learning systems from data-poisoning attacks while keeping communication costs low.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy, yet it faces significant challenges in communication efficiency and vulnerability to poisoning attacks. While sparsification techniques mitigate communication overhead by transmitting only critical model parameters, they inadvertently amplify security risks: adversarial clients can exploit sparse updates to evade detection and degrade model performance. Existing defense mechanisms, designed for standard FL communication scenarios, are ineffective in addressing these vulnerabilities within sparsified FL. To bridge this gap, we propose FLARE, a novel federated learning framework that integrates sparse index mask inspection and model update sign similarity analysis to detect and mitigate poisoning attacks in sparsified FL. Extensive experiments across multiple datasets and adversarial scenarios demonstrate that FLARE significantly outperforms existing defense strategies, effectively securing sparsified FL against poisoning attacks while maintaining communication efficiency.
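The abstract describes FLARE as combining two signals: inspecting the sparse index masks clients submit, and comparing the signs of their model updates. The sketch below is a minimal illustration of those two ideas, not the authors' implementation; all function names, the Jaccard overlap choice, and the 0.5 flagging threshold are assumptions for demonstration.

```python
import numpy as np

def index_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard overlap between two clients' sparse index masks."""
    inter = np.intersect1d(mask_a, mask_b).size
    union = np.union1d(mask_a, mask_b).size
    return inter / union if union else 0.0

def sign_similarity(vals_a, idx_a, vals_b, idx_b) -> float:
    """Fraction of shared parameter indices where two updates agree in sign."""
    shared, ia, ib = np.intersect1d(idx_a, idx_b, return_indices=True)
    if shared.size == 0:
        return 0.0
    return float(np.mean(np.sign(vals_a[ia]) == np.sign(vals_b[ib])))

def flag_suspects(indices, values, threshold=0.5):
    """Flag clients whose mean sign similarity to all peers falls below threshold."""
    n = len(indices)
    suspects = []
    for i in range(n):
        sims = [sign_similarity(values[i], indices[i], values[j], indices[j])
                for j in range(n) if j != i]
        if np.mean(sims) < threshold:
            suspects.append(i)
    return suspects

# Toy scenario: two benign clients push updates with matching signs on the
# same sparse indices; one adversary submits sign-flipped values.
indices = [np.arange(5), np.arange(5), np.arange(5)]
values = [np.ones(5), np.ones(5), -np.ones(5)]
print(flag_suspects(indices, values))  # the sign-flipped client stands out
```

The intuition is that benign clients optimizing the same objective tend to agree on gradient directions over shared coordinates, so a client whose signs systematically disagree with the majority is suspicious even when its sparse mask looks plausible.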

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡³ United States, China

Page Count
13 pages

Category
Computer Science:
Cryptography and Security