Quantized Rank Reduction: A Communications-Efficient Federated Learning Scheme for Network-Critical Applications
By: Dimitrios Kritsiolis, Constantine Kotropoulos
Potential Business Impact:
Lets phones learn together without sharing secrets.
Federated learning is a machine learning approach that enables multiple devices (i.e., agents) to train a shared model cooperatively without exchanging raw data. This technique keeps data localized on user devices, ensuring privacy and security: each agent trains the model on its own data and shares only model updates. Communication overhead is a significant challenge due to the frequent exchange of model updates between the agents and the central server. In this paper, we propose a communication-efficient federated learning scheme that uses low-rank approximation of neural network gradients together with quantization to significantly reduce the network load of the decentralized learning process with minimal impact on the model's accuracy.
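To make the two ingredients of the abstract concrete, the sketch below shows a generic way an agent could compress a layer's gradient before upload: a truncated SVD keeps only the top-`rank` singular directions, and the resulting factors are uniformly quantized to low-bit integers that the server can dequantize and multiply back together. This is a minimal illustration under assumed choices (`rank`, `num_bits`, NumPy, symmetric uniform quantization), not the authors' exact algorithm.

```python
# Sketch of low-rank + quantized gradient compression (illustrative, not the paper's exact method).
import numpy as np

def compress_gradient(grad: np.ndarray, rank: int = 8, num_bits: int = 8):
    """Return quantized low-rank factors of a 2-D gradient matrix."""
    # Truncated SVD: keep only the top-`rank` singular triplets.
    U, S, Vt = np.linalg.svd(grad, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (m, rank) left factor, scaled by singular values
    B = Vt[:rank, :]             # (rank, n) right factor

    def quantize(x):
        # Symmetric uniform quantizer: integers in [-(2^(b-1)-1), 2^(b-1)-1] plus one float scale.
        scale = float(np.max(np.abs(x))) / (2 ** (num_bits - 1) - 1) or 1.0
        return np.round(x / scale).astype(np.int8), scale

    return quantize(A), quantize(B)

def decompress_gradient(qA, qB):
    """Server-side reconstruction of the approximate gradient."""
    (A_q, a_scale), (B_q, b_scale) = qA, qB
    return (A_q.astype(np.float32) * a_scale) @ (B_q.astype(np.float32) * b_scale)

# Usage: an agent compresses a (256 x 128) gradient; the server reconstructs it.
grad = np.random.randn(256, 128).astype(np.float32)
approx = decompress_gradient(*compress_gradient(grad, rank=8, num_bits=8))
print("relative error:", np.linalg.norm(grad - approx) / np.linalg.norm(grad))
```

Instead of m*n floats, the agent transmits roughly rank*(m+n) low-bit integers plus two scales, which is where the communication savings come from.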
Similar Papers
Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks
Distributed, Parallel, and Cluster Computing
Trains AI faster with less data sent.
An Adaptive Clustering Scheme for Client Selections in Communication-Efficient Federated Learning
Machine Learning (CS)
Smartly groups users to train computers faster.
Trading-off Accuracy and Communication Cost in Federated Learning
Machine Learning (CS)
Makes AI learn faster with less data sent.