Toward Malicious Clients Detection in Federated Learning

Published: May 14, 2025 | arXiv ID: 2505.09110v2

By: Zhihao Dou, Jiaqi Wang, Wei Sun, and more

Potential Business Impact:

Detects malicious participants in collaboratively trained machine learning systems.

Business Areas:
Fraud Detection, Financial Services, Payments, Privacy and Security

Federated learning (FL) enables multiple clients to collaboratively train a global machine learning model without sharing their raw data. However, the decentralized nature of FL introduces vulnerabilities, particularly to poisoning attacks, where malicious clients manipulate their local models to disrupt the training process. While Byzantine-robust aggregation rules have been developed to mitigate such attacks, they remain inadequate against more advanced threats. In response, recent advancements have focused on FL detection techniques to identify potentially malicious participants. Unfortunately, these methods often misclassify numerous benign clients as threats or rely on unrealistic assumptions about the server's capabilities. In this paper, we propose a novel algorithm, SafeFL, specifically designed to accurately identify malicious clients in FL. The SafeFL approach involves the server collecting a series of global models to generate a synthetic dataset, which is then used to distinguish between malicious and benign models based on their behavior. Extensive testing demonstrates that SafeFL outperforms existing methods, offering superior efficiency and accuracy in detecting malicious clients.
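The abstract's core idea, evaluating each submitted client model against a server-side synthetic dataset and flagging models whose behavior is anomalous, can be sketched as follows. This is a hypothetical illustration, not the paper's actual SafeFL algorithm: the synthetic data here is a toy regression task, the "behavior" score is mean squared error on that data, and the outlier rule (loss above 5x the median) is an assumption chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: benign clients approximate a shared true model; one poisoned
# client submits a flipped-sign model (all names/values here are illustrative).
true_w = np.array([2.0, -1.0, 0.5])
X_syn = rng.normal(size=(200, 3))   # stand-in for the server's synthetic dataset
y_syn = X_syn @ true_w

# Nine benign models with small, bounded perturbations; one malicious model.
client_models = [
    true_w + 0.05 * np.array([np.cos(i), np.sin(i), np.cos(2 * i)])
    for i in range(9)
]
client_models.append(-true_w)       # malicious client (index 9)

def synthetic_loss(w):
    """Mean squared error of a client model on the synthetic dataset."""
    return float(np.mean((X_syn @ w - y_syn) ** 2))

losses = np.array([synthetic_loss(w) for w in client_models])

# Assumed detection rule for this sketch: flag any client whose loss on the
# synthetic data is far above the median loss across all submissions.
threshold = 5.0 * np.median(losses)
flagged = [i for i, loss in enumerate(losses) if loss > threshold]

print("losses:", np.round(losses, 4))
print("flagged clients:", flagged)  # the poisoned model stands out
```

The design point this illustrates is why a server-side evaluation set helps: the server never needs the clients' raw data, only their submitted models, and behavioral scoring on common data can separate a poisoned model even when its parameter vector is not an obvious statistical outlier under simple aggregation rules.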

Page Count
22 pages

Category
Computer Science:
Cryptography and Security