SHEFL: Resource-Aware Aggregation and Sparsification in Heterogeneous Ensemble Federated Learning
By: Keumseo Ryum, Jinu Gong, Joonhyuk Kang
Potential Business Impact:
Helps many computers train together without sharing secrets.
Federated learning (FL) enables distributed training on private client data, but its convergence is hindered by system heterogeneity under realistic communication scenarios. Most FL schemes that address system heterogeneity rely on global pruning or ensemble distillation, yet they often overlook the constraints typically required for communication efficiency. Meanwhile, deep ensembles can aggregate predictions from individually trained models to improve performance, but existing ensemble-based FL methods fall short of fully capturing the diversity of model predictions. In this work, we propose SHEFL, a global ensemble-based FL framework suited to clients with diverse computational capacities. We allocate different numbers of global models to clients according to their available resources. We also introduce a novel aggregation scheme that mitigates the training bias between clients and dynamically adjusts the sparsification ratio across clients to reduce the computational burden of training deep ensembles. Extensive experiments demonstrate that our method effectively addresses computational heterogeneity, significantly improving accuracy and stability over existing approaches.
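For intuition only, below is a minimal Python sketch of the two mechanisms the abstract describes: allocating ensemble models to clients in proportion to their compute budgets and choosing a per-client sparsification ratio, with a simple reweighted average standing in for the aggregation step. All names (`allocate_models`, `sparsification_ratio`, `aggregate`) and the inverse-count reweighting are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch (not the authors' code): resource-aware allocation
# of ensemble models and per-client sparsification ratios.
import numpy as np

def allocate_models(budgets, max_models=4):
    """Assign each client a number of global ensemble models proportional
    to its normalized compute budget, with at least one model each."""
    budgets = np.asarray(budgets, dtype=float)
    frac = budgets / budgets.max()
    return np.maximum(1, np.round(frac * max_models)).astype(int)

def sparsification_ratio(budgets, min_keep=0.2, max_keep=1.0):
    """Keep a larger fraction of weights on better-provisioned clients;
    prune more aggressively on constrained ones."""
    budgets = np.asarray(budgets, dtype=float)
    frac = budgets / budgets.max()
    return min_keep + (max_keep - min_keep) * frac

def aggregate(client_updates, counts):
    """Average client updates with weights that compensate for how many
    ensemble members each client trained (a hypothetical stand-in for
    the paper's bias-mitigating aggregation scheme)."""
    weights = 1.0 / np.asarray(counts, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

if __name__ == "__main__":
    budgets = [1.0, 4.0, 8.0]             # relative compute capacities
    counts = allocate_models(budgets)      # ensemble members per client
    keep = sparsification_ratio(budgets)   # fraction of weights kept
    updates = [np.full(3, i + 1.0) for i in range(len(budgets))]
    print("models per client:", counts)
    print("keep ratios:", np.round(keep, 2))
    print("aggregated update:", aggregate(updates, counts))
```

The sketch only conveys the resource-aware intuition: low-budget clients train fewer, more heavily sparsified models, and the server reweights their contributions so they do not bias the shared ensemble.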
Similar Papers
Lightweight Federated Learning in Mobile Edge Computing with Statistical and Device Heterogeneity Awareness
Systems and Control
Makes phones learn together without sharing private data.
SHeRL-FL: When Representation Learning Meets Split Learning in Hierarchical Federated Learning
Machine Learning (CS)
Trains AI faster with less data sent.