Byzantine-Robust Federated Learning Using Generative Adversarial Networks

Published: March 26, 2025 | arXiv ID: 2503.20884v3

By: Usama Zafar, André M. H. Teixeira, Salman Toor

Potential Business Impact:
Keeps collaboratively trained AI models safe from poisoned data and malicious participants, without requiring a trusted external validation dataset.

Business Areas:
A/B Testing, Data and Analytics

Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data, but its robustness is threatened by Byzantine behaviors such as data and model poisoning. Existing defenses face fundamental limitations: robust aggregation rules incur error lower bounds that grow with client heterogeneity, while detection-based methods often rely on heuristics (e.g., a fixed number of malicious clients) or require trusted external datasets for validation. We present a defense framework that addresses these challenges by leveraging a conditional generative adversarial network (cGAN) at the server to synthesize representative data for validating client updates. This approach eliminates reliance on external datasets, adapts to diverse attack strategies, and integrates seamlessly into standard FL workflows. Extensive experiments on benchmark datasets demonstrate that our framework accurately distinguishes malicious from benign clients while maintaining overall model accuracy. Beyond Byzantine robustness, we also examine the representativeness of synthesized data, computational costs of cGAN training, and the transparency and scalability of our approach.
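The server-side loop the abstract describes — synthesize labeled data with the cGAN, score each client update against it, and aggregate only the updates that pass — might look roughly like the PyTorch sketch below. The generator interface (`generator(z, labels)`, `generator.latent_dim`), the median-absolute-deviation outlier cutoff, and all function and variable names are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
import copy
import torch

def validate_and_aggregate(global_model, client_states, generator, num_classes,
                           n_samples=256, device="cpu"):
    """Score each client update on cGAN-synthesized data; average the benign ones.

    A minimal sketch, assuming `generator` is a trained conditional GAN whose
    forward pass maps (noise, class labels) -> samples in the task's input space.
    """
    # Synthesize a balanced, labeled validation batch at the server.
    labels = torch.arange(num_classes, device=device).repeat(n_samples // num_classes)
    z = torch.randn(len(labels), generator.latent_dim, device=device)
    with torch.no_grad():
        synth_x = generator(z, labels)

    # Evaluate every submitted update on the synthetic batch.
    criterion = torch.nn.CrossEntropyLoss()
    losses = []
    for state in client_states:
        model = copy.deepcopy(global_model).to(device)
        model.load_state_dict(state)
        model.eval()
        with torch.no_grad():
            losses.append(criterion(model(synth_x), labels).item())

    # Flag clients whose validation loss deviates strongly from the median.
    # This MAD-based cutoff is a simple stand-in heuristic, not the paper's rule.
    losses_t = torch.tensor(losses)
    med = losses_t.median()
    mad = (losses_t - med).abs().median() + 1e-8
    benign = ((losses_t - med).abs() / mad) < 3.0

    # Plain federated averaging over the updates judged benign.
    kept = [s for s, ok in zip(client_states, benign.tolist()) if ok]
    new_state = {k: torch.stack([s[k].float() for s in kept]).mean(0)
                 for k in kept[0]}
    return new_state, benign.tolist()
```

Because the validation data comes from the server's own cGAN, this kind of filter needs no external dataset and no assumed count of malicious clients, which is the limitation of prior detection-based defenses that the abstract highlights.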

Country of Origin
🇸🇪 Sweden

Page Count
22 pages

Category
Computer Science: Cryptography and Security