Communication-Efficient Zero-Order and First-Order Federated Learning Methods over Wireless Networks
By: Mohamad Assaad, Zeinab Nehme, Merouane Debbah
Potential Business Impact:
Makes phones learn together without sharing secrets.
Federated Learning (FL) is an emerging learning framework that enables edge devices to collaboratively train ML models without sharing their local data. FL faces, however, a significant challenge due to the large amount of information that must be exchanged between the devices and the aggregator during the training phase, which can exceed the limited capacity of wireless systems. In this paper, two communication-efficient FL methods are considered in which communication overhead is reduced by communicating scalar values instead of long vectors and by allowing a large number of users to transmit simultaneously. The first approach employs a zero-order optimization technique with a two-point gradient estimator, while the second involves a first-order gradient computation strategy. The novelty lies in leveraging channel information in the learning algorithms, thereby eliminating the need for additional resources to acquire channel state information (CSI) and to compensate for its impact, as well as in considering asynchronous devices. We provide a rigorous analytical framework for the two methods, deriving convergence guarantees and establishing appropriate performance bounds.
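As a rough illustration of the two-point (zero-order) gradient estimator mentioned in the abstract, the sketch below shows how a device can produce a gradient estimate from just two scalar loss evaluations, which is what makes communicating scalars instead of full gradient vectors possible. The function names, smoothing parameter `mu`, and step size are illustrative assumptions; the paper's actual algorithms additionally fold in the wireless channel coefficients and asynchronous updates, which are not modeled here.

```python
import numpy as np

def two_point_gradient_estimate(loss_fn, theta, mu=1e-3, rng=None):
    """Zero-order (two-point) gradient estimate of loss_fn at theta.

    Perturbs the parameters along a random unit direction and uses the
    difference of two scalar loss evaluations, so a device only needs to
    report scalar values rather than a full gradient vector.
    (Illustrative sketch, not the paper's channel-aware algorithm.)
    """
    rng = np.random.default_rng() if rng is None else rng
    d = theta.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # random unit direction
    delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
    return (d * delta / (2.0 * mu)) * u         # directional finite difference

# Toy usage on a single device with a quadratic loss (hypothetical example)
if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    loss = lambda w: 0.5 * np.sum((w - target) ** 2)
    w = np.zeros(3)
    for _ in range(2000):
        w -= 0.05 * two_point_gradient_estimate(loss, w)
    print(w)  # should approach the target vector
```

In a federated setting, each device would evaluate its local loss at the two perturbed points and report only the resulting scalar difference to the aggregator, which is the communication-saving idea the abstract refers to.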
Similar Papers
Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization
Distributed, Parallel, and Cluster Computing
Makes phones train AI without sharing private info.
Optimal Batch-Size Control for Low-Latency Federated Learning with Device Heterogeneity
Machine Learning (CS)
Makes smart devices learn faster, privately.
Caching Techniques for Reducing the Communication Cost of Federated Learning in IoT Environments
Distributed, Parallel, and Cluster Computing
Smarter sharing makes AI learn faster, cheaper.