Cost-TrustFL: Cost-Aware Hierarchical Federated Learning with Lightweight Reputation Evaluation across Multi-Cloud
By: Jixiao Yang, Jinyu Chen, Zixiao Huang, and more
Federated learning across multi-cloud environments faces critical challenges, including non-IID data distributions, malicious participants, and substantial cross-cloud communication costs (egress fees). Existing Byzantine-robust methods focus primarily on model accuracy while overlooking the economic cost of data transfer across cloud providers. This paper presents Cost-TrustFL, a hierarchical federated learning framework that jointly optimizes model performance and communication cost while providing robust defense against poisoning attacks. We propose a gradient-based approximate Shapley value computation that reduces complexity from exponential to linear in the number of clients, enabling lightweight reputation evaluation. Our cost-aware aggregation strategy prioritizes intra-cloud communication to minimize expensive cross-cloud transfers. Experiments on CIFAR-10 and FEMNIST demonstrate that Cost-TrustFL achieves 86.7% accuracy with 30% malicious clients while reducing communication costs by 32% relative to baseline methods. The framework maintains stable performance across varying degrees of non-IID data and attack intensity, making it practical for real-world multi-cloud deployments.
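To make the abstract's two mechanisms concrete, here is a minimal sketch, not the authors' reference implementation: it assumes cosine similarity between a client's gradient and the mean gradient as the linear-time Shapley proxy, and an equal-weight cross-cloud merge after reputation-weighted intra-cloud averaging. All function names and both design choices are illustrative assumptions.

```python
# Sketch (assumptions, not the paper's exact method) of:
# (1) a linear-time, gradient-based proxy for Shapley-style client reputation;
# (2) two-stage cost-aware aggregation: average inside each cloud first, then
#     do a single cross-cloud (egress-priced) merge.
import numpy as np

def reputation_scores(client_grads: list[np.ndarray]) -> np.ndarray:
    """Score each client's gradient against the mean gradient via cosine
    similarity: an O(n) proxy for marginal contribution, instead of the
    2^n coalition evaluations exact Shapley values require."""
    mean_grad = np.mean(client_grads, axis=0)
    scores = np.array([
        float(g @ mean_grad) / (np.linalg.norm(g) * np.linalg.norm(mean_grad) + 1e-12)
        for g in client_grads
    ])
    # Clip negative scores: updates opposing the consensus direction
    # (a common signature of poisoning) get zero aggregation weight.
    return np.clip(scores, 0.0, None)

def hierarchical_aggregate(clouds: dict[str, list[np.ndarray]]) -> np.ndarray:
    """Stage 1: reputation-weighted averaging inside each cloud (cheap,
    intra-cloud traffic). Stage 2: one aggregate per cloud crosses the
    cloud boundary, so egress traffic scales with #clouds, not #clients."""
    per_cloud_models = []
    for grads in clouds.values():
        w = reputation_scores(grads)
        w = w / w.sum() if w.sum() > 0 else np.full(len(grads), 1.0 / len(grads))
        per_cloud_models.append(np.average(grads, axis=0, weights=w))
    return np.mean(per_cloud_models, axis=0)  # single cross-cloud merge
```

Under this design, egress-priced transfers grow with the number of clouds rather than the number of clients, which is plausibly where the reported 32% communication-cost reduction would come from.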
Similar Papers
Federated Learning Framework for Scalable AI in Heterogeneous HPC and Cloud Environments
Distributed, Parallel, and Cluster Computing
Trains AI on many computers without sharing private data.
Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering
Machine Learning (CS)
Protects smart learning from bad data.