Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering
By: Emmanouil Kritharakis, Dusan Jakovetic, Antonios Makris and more
Potential Business Impact:
Keeps AI learning safe from bad data.
Federated Learning (FL) enables collaborative model training across multiple clients without sharing private data. We consider FL scenarios in which clients are subject to adversarial (Byzantine) attacks, while the FL server is trusted (honest) and holds a trustworthy side dataset. This may correspond, e.g., to cases where the server possesses trusted data prior to federation, or to the presence of a trusted client that temporarily assumes the server role. Our approach requires only two honest participants, i.e., the server and one client, to function effectively, without prior knowledge of the number of malicious clients. Theoretical analysis demonstrates bounded optimality gaps even under strong Byzantine attacks. Experimental results show that our algorithm significantly outperforms standard and robust FL baselines such as Mean, Trimmed Mean, Median, Krum, and Multi-Krum under various attack strategies, including label flipping, sign flipping, and Gaussian noise addition, across the MNIST, FMNIST, and CIFAR-10 benchmarks using the Flower framework.
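To make the idea of loss-based client clustering concrete, the following is a minimal, hypothetical sketch of what a trusted server could do each round: score every client's model update on the server's trusted side dataset, cluster clients by that loss, and aggregate only the low-loss cluster. The two-cluster split, the `eval_loss_fn` callable, and simple averaging are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans


def loss_based_aggregate(client_updates, eval_loss_fn, n_clusters=2):
    """Sketch of loss-based client clustering at a trusted server.

    client_updates: list of flattened model parameter vectors (np.ndarray),
                    one per client.
    eval_loss_fn:   callable mapping a parameter vector to its loss on the
                    server's trusted side dataset (assumed to exist).
    """
    # Score every client update on the trusted server-side data.
    losses = np.array([eval_loss_fn(u) for u in client_updates]).reshape(-1, 1)

    # Cluster clients by their trusted-data loss (assumed two clusters:
    # presumed-honest vs. presumed-Byzantine).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(losses)

    # Keep the cluster with the lower mean loss.
    cluster_means = [losses[labels == c].mean() for c in range(n_clusters)]
    honest = int(np.argmin(cluster_means))
    kept = [u for u, lab in zip(client_updates, labels) if lab == honest]

    # Aggregate the retained updates by simple averaging (one possible choice).
    return np.mean(kept, axis=0)
```

In a Flower deployment, a routine like this would plug into a custom server-side strategy's aggregation step, replacing plain federated averaging; the details above are only one plausible instantiation of the loss-based clustering idea described in the abstract.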
Similar Papers
Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering
Machine Learning (CS)
Protects smart learning from bad data.
Byzantine-Robust Federated Learning with Learnable Aggregation Weights
Machine Learning (CS)
Keeps smart learning safe from bad guys.
Fairness-Constrained Optimization Attack in Federated Learning
Machine Learning (CS)
Makes AI unfairly biased, even when it seems accurate.