Communication-Efficient Federated Learning with Adaptive Number of Participants
By: Sergey Skorik, Vladislav Dorofeev, Gleb Molodtsov, and more
Potential Business Impact:
Picks how many devices train the AI each round, cutting communication while learning just as well.
Rapid scaling of deep learning models has enabled performance gains across domains, yet it has also introduced several challenges. Federated Learning (FL) has emerged as a promising framework for addressing these challenges by enabling decentralized training. Nevertheless, communication efficiency remains a key bottleneck in FL, particularly under heterogeneous and dynamic client participation. Existing methods such as FedAvg and FedProx, as well as client selection strategies, attempt to mitigate communication costs. However, the problem of choosing the number of clients in a training round remains largely underexplored. We introduce Intelligent Selection of Participants (ISP), an adaptive mechanism that dynamically determines the optimal number of clients per round to enhance communication efficiency without compromising model accuracy. We validate the effectiveness of ISP across diverse setups, including vision transformers, real-world ECG classification, and training with gradient compression. Our results show consistent communication savings of up to 30% with no loss in final model quality. Applying ISP to several real-world ECG classification setups further highlights the choice of the number of clients as a distinct problem in federated learning.
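The abstract describes ISP only at a high level, so the following is a minimal sketch of a FedAvg-style training loop in which the number of participants per round becomes a per-round decision variable rather than a fixed hyperparameter. The adaptation rule shown here (grow the cohort when validation loss stalls, shrink it otherwise), and the helper names `local_train`, `evaluate`, and `federated_round`, are illustrative assumptions and not the paper's actual ISP criterion.

```python
import random
import numpy as np

def average_weights(weight_list):
    """Element-wise average of client weight arrays (FedAvg aggregation)."""
    return np.mean(np.stack(weight_list), axis=0)

def federated_round(global_weights, clients, num_participants, local_train):
    """One FedAvg round with a given number of participating clients."""
    selected = random.sample(clients, num_participants)
    updates = [local_train(global_weights, client) for client in selected]
    return average_weights(updates)

def train_with_adaptive_participants(global_weights, clients, local_train,
                                     evaluate, rounds=100,
                                     min_clients=2, max_clients=None):
    """
    Federated training loop where the number of participants per round is
    adapted over time.  The rule below is a simple illustrative heuristic,
    not the ISP criterion from the paper: use more clients when validation
    loss stops improving, fewer when progress is steady.
    """
    max_clients = max_clients or len(clients)
    num_participants = min_clients
    prev_loss = float("inf")
    for _ in range(rounds):
        global_weights = federated_round(global_weights, clients,
                                         num_participants, local_train)
        loss = evaluate(global_weights)
        if loss >= prev_loss:
            # Progress stalled: enlarge the cohort (more communication).
            num_participants = min(num_participants + 1, max_clients)
        else:
            # Steady progress: try a smaller cohort (less communication).
            num_participants = max(num_participants - 1, min_clients)
        prev_loss = loss
    return global_weights
```

In an actual FL deployment, `local_train` would run local SGD on each client's private data and return updated weights, and `evaluate` would measure loss on a held-out validation set; the sketch only illustrates treating the cohort size as an adaptive quantity.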
Similar Papers
An Adaptive Clustering Scheme for Client Selections in Communication-Efficient Federated Learning
Machine Learning (CS)
Smartly groups users to train computers faster.
Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization
Distributed, Parallel, and Cluster Computing
Makes phones train AI without sharing private info.
Communication-Efficient Device Scheduling for Federated Learning Using Lyapunov Optimization
Machine Learning (CS)
Makes smart devices learn faster without sharing data.