Runtime Backdoor Detection for Federated Learning via Representational Dissimilarity Analysis
By: Xiyue Zhang, Xiaoyong Xue, Xiaoning Du, and more
Potential Business Impact:
Detects malicious clients that try to corrupt collaboratively trained machine-learning models.
Federated learning (FL), as a powerful learning paradigm, trains a shared model by aggregating model updates from distributed clients. However, the decoupling of model learning from local data makes FL highly vulnerable to backdoor attacks, where a single compromised client can poison the shared model. While recent progress has been made in backdoor detection, existing methods still struggle with detection accuracy and runtime effectiveness, particularly for complex model architectures. In this work, we propose a novel approach to detecting malicious clients in an accurate, stable, and efficient manner. Our approach uses sampling-based network representations to quantify dissimilarities between clients and thereby identify model deviations caused by backdoor injections. We also propose an iterative algorithm that progressively detects and excludes malicious clients as outliers based on these dissimilarity measurements. Evaluations across a range of benchmark tasks demonstrate that our approach outperforms state-of-the-art methods in detection accuracy and defense effectiveness. When deployed for runtime protection, it effectively eliminates backdoor injections with marginal overhead.
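The abstract outlines the core mechanism: probe each client's model with shared sampled inputs, compare the resulting internal representations pairwise, and iteratively exclude the client whose average dissimilarity to the rest stands out. The Python sketch below is a minimal illustration of that idea, not the authors' implementation; the representation_of helper, the correlation-based dissimilarity, and the robust z-score cutoff are illustrative assumptions.

import numpy as np

def representation_of(model, probe_inputs):
    # Assumption: `model` is a callable that returns a hidden-layer
    # activation matrix for a batch of shared probe inputs.
    return model(probe_inputs)

def pairwise_dissimilarity(reps):
    # Dissimilarity between two clients measured as 1 - correlation
    # of their flattened representation matrices (an assumed metric).
    n = len(reps)
    flat = [r.ravel() for r in reps]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            corr = np.corrcoef(flat[i], flat[j])[0, 1]
            d[i, j] = d[j, i] = 1.0 - corr
    return d

def detect_outliers(models, probe_inputs, max_rounds=5, z_thresh=2.5):
    # Iteratively flag clients whose mean dissimilarity to the remaining
    # clients is an outlier under a robust (median/MAD) z-score.
    active = list(range(len(models)))
    reps = [representation_of(m, probe_inputs) for m in models]
    flagged = set()
    for _ in range(max_rounds):
        d = pairwise_dissimilarity([reps[i] for i in active])
        avg = d.sum(axis=1) / max(len(active) - 1, 1)
        med = np.median(avg)
        mad = np.median(np.abs(avg - med)) + 1e-12
        z = (avg - med) / (1.4826 * mad)
        worst = int(np.argmax(z))
        if z[worst] <= z_thresh or len(active) <= 2:
            break  # no clear outlier left among the active clients
        flagged.add(active.pop(worst))  # exclude the most deviating client
    return flagged, active

In a runtime-protection setting like the one the abstract describes, the server would run such a check each aggregation round and average only the updates from clients that remain in the active set.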
Similar Papers
Enhancing the Effectiveness and Durability of Backdoor Attacks in Federated Learning through Maximizing Task Distinction
Machine Learning (CS)
Makes backdoor attacks on federated learning stealthier and longer-lasting.
TrojanDam: Detection-Free Backdoor Defense in Federated Learning through Proactive Model Robustification utilizing OOD Data
Cryptography and Security
Defends federated models against backdoor poisoning without relying on attacker detection.
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning
Machine Learning (CS)
Shows how colluding clients can plant backdoors in vertically federated models.