Score: 2

Runtime Backdoor Detection for Federated Learning via Representational Dissimilarity Analysis

Published: March 6, 2025 | arXiv ID: 2503.04473v1

By: Xiyue Zhang, Xiaoyong Xue, Xiaoning Du, and more

Potential Business Impact:

Detects compromised clients that try to inject backdoors into collaboratively trained machine learning models, protecting federated learning deployments from poisoning attacks.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Federated learning (FL), as a powerful learning paradigm, trains a shared model by aggregating model updates from distributed clients. However, the decoupling of model learning from local data makes FL highly vulnerable to backdoor attacks, where a single compromised client can poison the shared model. While recent progress has been made in backdoor detection, existing methods face challenges with detection accuracy and runtime effectiveness, particularly when dealing with complex model architectures. In this work, we propose a novel approach to detecting malicious clients in an accurate, stable, and efficient manner. Our method utilizes a sampling-based network representation method to quantify dissimilarities between clients, identifying model deviations caused by backdoor injections. We also propose an iterative algorithm to progressively detect and exclude malicious clients as outliers based on these dissimilarity measurements. Evaluations across a range of benchmark tasks demonstrate that our approach outperforms state-of-the-art methods in detection accuracy and defense effectiveness. When deployed for runtime protection, our approach effectively eliminates backdoor injections with marginal overheads.
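To make the approach concrete, here is a minimal NumPy sketch of the general idea described above: each client's model is probed with a shared sample batch, clients are compared via representational dissimilarity matrices (RDMs), and an iterative loop excludes outliers. The RDM construction (1 minus Pearson correlation), the median-absolute-deviation outlier rule, and all function names and thresholds are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: representational-dissimilarity-based outlier detection
# for federated learning clients. Assumes each client supplies the hidden
# representations its model produces on a shared probe batch.
import numpy as np

def rdm(reps: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the representations of every pair of probe inputs."""
    return 1.0 - np.corrcoef(reps)

def client_dissimilarity(reps_a: np.ndarray, reps_b: np.ndarray) -> float:
    """Distance between two clients = how differently their models 'see'
    the same probe inputs (compare upper triangles of their RDMs)."""
    iu = np.triu_indices_from(rdm(reps_a), k=1)
    return 1.0 - np.corrcoef(rdm(reps_a)[iu], rdm(reps_b)[iu])[0, 1]

def detect_outliers(client_reps, max_rounds=5, thresh=3.0):
    """Iteratively flag the client whose mean dissimilarity to the rest
    is an outlier (median-absolute-deviation rule), until none remain."""
    active = list(range(len(client_reps)))
    flagged = []
    for _ in range(max_rounds):
        if len(active) < 3:
            break
        D = np.array([[client_dissimilarity(client_reps[i], client_reps[j])
                       if i != j else 0.0
                       for j in active] for i in active])
        scores = D.sum(axis=1) / (len(active) - 1)
        med = np.median(scores)
        mad = np.median(np.abs(scores - med)) + 1e-12
        z = np.abs(scores - med) / mad
        worst = int(np.argmax(z))
        if z[worst] < thresh:
            break  # no remaining outliers: stop iterating
        flagged.append(active.pop(worst))
    return flagged, active  # excluded clients, clients kept for aggregation

# Toy usage: 9 benign clients share a representation pattern, 1 deviates.
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 64))                      # 32 probes, 64-dim reps
clients = [base + 0.1 * rng.normal(size=base.shape) for _ in range(9)]
clients.append(rng.normal(size=base.shape))           # simulated backdoored client
bad, kept = detect_outliers(clients)
print("flagged:", bad)  # expected: [9]
```

A dissimilarity-based comparison like this targets the behavioral deviation a backdoor induces in a model's internal representations, rather than raw weight differences, which is what lets such methods scale to complex architectures where weight-space comparisons become unreliable.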

Country of Origin
🇸🇬 🇨🇳 🇬🇧 🇦🇺 Singapore, China, United Kingdom, Australia

Page Count
19 pages

Category
Computer Science:
Cryptography and Security