OptiGradTrust: Byzantine-Robust Federated Learning with Multi-Feature Gradient Analysis and Reinforcement Learning-Based Trust Weighting
By: Mohammad Karami, Fatemeh Ghassemi, Hamed Kebriaei, and more
Potential Business Impact:
Protects medical AI from bad data and improves accuracy.
Federated Learning (FL) enables collaborative model training across distributed medical institutions while preserving patient privacy, but it remains vulnerable to Byzantine attacks and statistical heterogeneity. We present OptiGradTrust, a comprehensive defense framework that evaluates each gradient update through a novel six-dimensional fingerprint, including VAE reconstruction error, cosine similarity metrics, $L_2$ norm, sign-consistency ratio, and Monte Carlo Shapley value, which drives a hybrid RL-attention module for adaptive trust scoring. To address convergence challenges under data heterogeneity, we develop FedBN-Prox (FedBN-P), which combines Federated Batch Normalization with proximal regularization for an optimal accuracy-convergence trade-off. Extensive evaluation across MNIST, CIFAR-10, and Alzheimer's MRI datasets under various Byzantine attack scenarios demonstrates significant improvements over state-of-the-art defenses: OptiGradTrust achieves up to +1.6 percentage points over FLGuard under non-IID conditions while maintaining robust performance against diverse attack patterns through its adaptive learning approach.
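To make the fingerprint concrete, here is a minimal sketch of the three lightweight features named in the abstract (cosine similarity, $L_2$ norm, and sign-consistency ratio). The VAE reconstruction error and Monte Carlo Shapley value require a trained VAE and a validation set, so they are omitted here; all function and variable names are illustrative, not taken from the paper's code.

```python
# Sketch: three of the six fingerprint features, scored against a reference
# direction (e.g., the coordinate-wise mean of all client updates).
import numpy as np

def fingerprint_features(update: np.ndarray, reference: np.ndarray) -> dict:
    """Score one client's flattened gradient update against a reference."""
    l2_norm = float(np.linalg.norm(update))
    cosine = float(
        update @ reference
        / (np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12)
    )
    # Fraction of coordinates whose sign agrees with the reference update.
    sign_consistency = float(np.mean(np.sign(update) == np.sign(reference)))
    return {"l2_norm": l2_norm, "cosine": cosine,
            "sign_consistency": sign_consistency}

# Example: an honest update vs. a scaled sign-flipping Byzantine update.
rng = np.random.default_rng(0)
honest = rng.normal(size=1000)
reference = honest + 0.1 * rng.normal(size=1000)
print(fingerprint_features(honest, reference))       # high cosine, high agreement
print(fingerprint_features(-5 * honest, reference))  # low cosine, inflated norm
```

A sign-flipped or norm-inflated update stands out on all three features at once, which is the intuition behind combining them into a single fingerprint.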
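The fingerprint then feeds a hybrid RL-attention module that produces per-client trust scores. The sketch below shows only the final trust-weighted aggregation step, with a temperature-scaled softmax standing in for the learned module; the function name and temperature parameter are assumptions for illustration, not the paper's implementation.

```python
# Sketch: server-side aggregation as a convex combination of client updates,
# weighted by trust scores (here softmax-normalized for stability).
import numpy as np

def trust_weighted_aggregate(updates: np.ndarray, trust_scores: np.ndarray,
                             temperature: float = 1.0) -> np.ndarray:
    """updates: (n_clients, n_params); trust_scores: (n_clients,)."""
    logits = (trust_scores - trust_scores.max()) / temperature  # numerically stable
    weights = np.exp(logits)
    weights /= weights.sum()
    return weights @ updates  # low-trust clients contribute little to the result
```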
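FedBN-P, as described, combines two known ingredients: a FedProx-style proximal term on the local objective and FedBN's rule of keeping BatchNorm parameters client-local. Below is a hedged PyTorch sketch assuming BN parameter names contain "bn" and a default mu of 0.01; both are illustrative choices, not values from the paper.

```python
import torch

def local_step(model, global_params, batch, loss_fn, optimizer, mu=0.01):
    """One FedProx-style local update: task loss + (mu/2) * ||w - w_global||^2.
    global_params is a list of tensors aligned with model.parameters()."""
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    optimizer.step()

def load_global_skip_bn(model, global_state):
    """FedBN: adopt the aggregated global weights except BatchNorm layers,
    which stay client-local to handle feature-shift heterogeneity."""
    local_state = model.state_dict()
    for name, tensor in global_state.items():
        if "bn" not in name:  # assumes BN modules are named with 'bn'
            local_state[name] = tensor
    model.load_state_dict(local_state)
```

The proximal term discourages local models from drifting far from the global model under non-IID data, while the BN exclusion lets each institution keep normalization statistics matched to its own data distribution.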
Similar Papers
Byzantine-Robust Federated Learning with Learnable Aggregation Weights
Machine Learning (CS)
Keeps smart learning safe from bad guys.
ProDiGy: Proximity- and Dissimilarity-Based Byzantine-Robust Federated Learning
Machine Learning (CS)
Protects computer learning from bad data.
Cost-TrustFL: Cost-Aware Hierarchical Federated Learning with Lightweight Reputation Evaluation across Multi-Cloud
Machine Learning (CS)
Saves money on cloud data transfer.