FedADP: Unified Model Aggregation for Federated Learning with Heterogeneous Model Architectures
By: Jiacheng Wang, Hongtao Lv, Lei Liu
Potential Business Impact:
Lets computers with different AI models and capabilities learn together.
Traditional Federated Learning (FL) faces significant efficiency and accuracy challenges, particularly in heterogeneous environments where clients employ diverse model architectures and have varying computational resources. Such heterogeneity complicates the aggregation process, leading to performance bottlenecks and reduced model generalizability. To address these issues, we propose FedADP, a federated learning framework designed to adapt to client heterogeneity by dynamically adjusting model architectures during aggregation. FedADP enables effective collaboration among clients with differing capabilities, maximizing resource utilization and ensuring model quality. Our experimental results demonstrate that FedADP significantly outperforms existing methods such as FlexiFed, achieving an accuracy improvement of up to 23.30% and enhancing model adaptability and training efficiency in heterogeneous real-world settings.
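The abstract does not spell out FedADP's aggregation rule, so the sketch below shows only a common baseline for aggregating models with heterogeneous architectures: shape-matched layer-wise averaging, where layers that share a name and shape across clients are averaged and architecture-specific layers stay local. All names here (aggregate_heterogeneous, client_states, the layer keys) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of heterogeneous-model aggregation: tensors that match
# in both layer name and shape across clients are averaged; everything else
# (e.g., architecture-specific heads) is returned unchanged. This is a
# baseline illustration, not FedADP's actual aggregation rule.
from collections import defaultdict
import numpy as np

def aggregate_heterogeneous(client_states):
    """client_states: list of dicts mapping layer name -> np.ndarray weights."""
    # Group tensors by (name, shape) so only compatible weights are pooled.
    groups = defaultdict(list)
    for state in client_states:
        for name, w in state.items():
            groups[(name, w.shape)].append(w)

    # Average within each compatible group (singleton groups pass through).
    averaged = {key: np.mean(ws, axis=0) for key, ws in groups.items()}

    # Rebuild each client's state dict in its own architecture.
    return [{name: averaged[(name, w.shape)] for name, w in state.items()}
            for state in client_states]

# Example: two clients share an early conv layer but have different heads.
client_a = {"conv1.w": np.ones((8, 3, 3, 3)), "head.w": np.ones((10, 64))}
client_b = {"conv1.w": np.zeros((8, 3, 3, 3)), "head.w": np.ones((10, 128))}
states = aggregate_heterogeneous([client_a, client_b])
print(states[0]["conv1.w"].mean())  # 0.5: shared layer was averaged
print(states[1]["head.w"].shape)    # (10, 128): client-specific head kept
```

Grouping by (name, shape) pools knowledge in layers the architectures have in common while leaving incompatible layers personalized, which is the basic idea behind clustered common-layer strategies in prior heterogeneous-FL work such as FlexiFed; FedADP's dynamic architecture adjustment presumably goes beyond this static matching.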
Similar Papers
Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible
Machine Learning (CS)
Makes AI learn faster without mistakes.
FedAPM: Federated Learning via ADMM with Partial Model Personalization
Machine Learning (CS)
Helps AI learn better from different people's phones.
Robust Federated Learning on Edge Devices with Domain Heterogeneity
Machine Learning (CS)
Helps AI learn from many computers without seeing private data.