FedSUM Family: Efficient Federated Learning Methods under Arbitrary Client Participation
By: Runze You, Shi Pu
Federated Learning (FL) methods are often designed for specific client participation patterns, limiting their applicability in practical deployments. We introduce the FedSUM family of algorithms, which supports arbitrary client participation without additional assumptions on data heterogeneity. Our framework models participation variability with two delay metrics: the maximum delay $\tau_{\max}$ and the average delay $\tau_{\text{avg}}$. The FedSUM family comprises three variants: FedSUM-B (basic version), FedSUM (standard version), and FedSUM-CR (communication-reduced version). We provide unified convergence guarantees demonstrating the effectiveness of our approach across diverse participation patterns, thereby broadening the applicability of FL in real-world scenarios.
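As a rough illustration of the two delay metrics (the abstract does not spell out their precise definitions), the sketch below assumes that a client's delay at round $t$ is the number of rounds since it last participated, with $\tau_{\max}$ the worst case over all clients and rounds and $\tau_{\text{avg}}$ the mean. The function name and data layout are hypothetical, not part of the paper.

```python
# Illustrative sketch only: assumes tau_i(t) = rounds since client i last participated,
# tau_max = max over clients and rounds, tau_avg = mean over clients and rounds.
from typing import Dict, List


def participation_delays(participation: Dict[int, List[int]], total_rounds: int):
    """participation[i] is the sorted list of rounds in which client i took part.

    Returns (tau_max, tau_avg) under the delay definition assumed above.
    """
    delays = []
    for rounds in participation.values():
        last_seen = 0
        idx = 0
        for t in range(total_rounds):
            # Advance past every participation that happened at or before round t.
            while idx < len(rounds) and rounds[idx] <= t:
                last_seen = rounds[idx]
                idx += 1
            delays.append(t - last_seen)  # rounds elapsed since this client's last update
    tau_max = max(delays)
    tau_avg = sum(delays) / len(delays)
    return tau_max, tau_avg


# Example: two clients over 6 rounds with uneven participation.
history = {0: [0, 1, 4], 1: [0, 3]}
print(participation_delays(history, total_rounds=6))  # -> (2, 0.8333...)
```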