FedGMR: Federated Learning with Gradual Model Restoration under Asynchrony and Model Heterogeneity
By: Chengjie Ma, Seungeun Oh, Jihong Park, and more
Federated learning (FL) holds strong potential for distributed machine learning, but in heterogeneous environments, Bandwidth-Constrained Clients (BCCs) often struggle to participate effectively due to limited communication capacity. Their small sub-models learn quickly at first but become under-parameterized in later stages, leading to slow convergence and degraded generalization. We propose FedGMR - Federated Learning with Gradual Model Restoration under Asynchrony and Model Heterogeneity. FedGMR progressively increases each client's sub-model density during training, enabling BCCs to remain effective contributors throughout the process. In addition, we develop a mask-aware aggregation rule tailored for asynchronous model-heterogeneous FL (MHFL) and provide convergence guarantees showing that the aggregation error scales with the average sub-model density across clients and rounds, while gradual model restoration (GMR) provably shrinks this gap toward full-model FL. Extensive experiments on FEMNIST, CIFAR-10, and ImageNet-100 demonstrate that FedGMR achieves faster convergence and higher accuracy, especially under high heterogeneity and non-IID settings.
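To make the two core ideas in the abstract concrete, here is a minimal sketch of (a) a gradually increasing sub-model density schedule and (b) a mask-aware, coordinate-wise aggregation step. This is not the paper's implementation: the linear schedule, the random masking, and the helper names `density_schedule`, `make_mask`, and `mask_aware_aggregate` are illustrative assumptions only.

```python
import numpy as np

def density_schedule(round_idx, total_rounds, d_init, d_final=1.0):
    """Hypothetical linear schedule: grow a client's sub-model density
    from d_init toward d_final over the training rounds."""
    frac = min(round_idx / max(total_rounds - 1, 1), 1.0)
    return d_init + frac * (d_final - d_init)

def make_mask(shape, density, rng):
    """Random binary mask keeping roughly a `density` fraction of parameters
    (a stand-in for whatever sub-model selection rule the method uses)."""
    return (rng.random(shape) < density).astype(np.float32)

def mask_aware_aggregate(global_w, client_updates):
    """Average client parameters per coordinate, counting only clients whose
    mask covers that coordinate; uncovered coordinates keep the global value.
    client_updates: list of (masked_weights, mask) pairs."""
    num = np.zeros_like(global_w)
    den = np.zeros_like(global_w)
    for w_k, m_k in client_updates:
        num += m_k * w_k
        den += m_k
    covered = den > 0
    out = global_w.copy()
    out[covered] = num[covered] / den[covered]
    return out

# Toy usage: three clients of different capacity at round 5 of 20.
rng = np.random.default_rng(0)
global_w = rng.standard_normal(10).astype(np.float32)
updates = []
for d_init in [0.3, 0.5, 1.0]:
    d = density_schedule(round_idx=5, total_rounds=20, d_init=d_init)
    mask = make_mask(global_w.shape, d, rng)
    local_w = global_w + 0.1 * mask   # stand-in for a local training step
    updates.append((mask * local_w, mask))
new_global = mask_aware_aggregate(global_w, updates)
```

The per-coordinate normalization is one plausible reading of "mask-aware aggregation": coordinates held by few clients are not diluted by zeros from clients that never trained them, which is consistent with the claim that error scales with average sub-model density.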
Similar Papers
Knowledge-Driven Federated Graph Learning on Model Heterogeneity
Machine Learning (CS)
Lets different computers learn together safely.
HFedATM: Hierarchical Federated Domain Generalization via Optimal Transport and Regularized Mean Aggregation
Machine Learning (CS)
Helps AI learn from many devices without sharing data.
Federated Gaussian Mixture Models
Machine Learning (CS)
Lets phones learn together without sharing secrets.