Communication-Efficient Distributed Asynchronous ADMM
By: Sagar Shrestha
Potential Business Impact:
Shrinks the data that machines exchange, speeding up large-scale computer learning.
In distributed optimization and federated learning, the asynchronous alternating direction method of multipliers (ADMM) is an attractive option for large-scale optimization: it preserves data privacy, tolerates straggler nodes, and accommodates a wide variety of objective functions. However, communication costs can become a major bottleneck when the nodes have limited communication budgets or when the data to be communicated is prohibitively large. In this work, we propose applying coarse quantization to the data exchanged in asynchronous ADMM so as to reduce communication overhead in large-scale federated learning and distributed optimization applications. We experimentally verify the convergence of the proposed method on several distributed learning tasks, including neural network training.
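The abstract does not specify the quantizer, so the following is only a minimal sketch of what "coarse quantization of the exchanged data" could look like in practice: a uniform b-bit quantizer applied to a worker's local update before it is transmitted, with the scale and offset sent alongside so the receiver can dequantize. The function names, the 4-bit width, and the use of a worker's primal update as the message are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def coarse_quantize(x, num_bits=4):
    """Uniformly quantize a vector to 2**num_bits levels over its observed range.

    Returns integer codes plus the (scale, offset) needed to reconstruct an
    approximation on the receiving node. Illustrative only; not the paper's scheme.
    """
    levels = 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / levels if x_max > x_min else 1.0
    codes = np.round((x - x_min) / scale).astype(np.uint8)  # integers in [0, levels]
    return codes, scale, x_min

def dequantize(codes, scale, offset):
    """Reconstruct an approximate real-valued vector from the quantized message."""
    return codes.astype(np.float64) * scale + offset

# Example: a worker quantizes its local ADMM update before sending it to the server.
local_update = np.random.randn(1000)               # stand-in for a worker's x-update
msg, scale, offset = coarse_quantize(local_update, num_bits=4)
recovered = dequantize(msg, scale, offset)
print("max quantization error:", np.abs(recovered - local_update).max())
```

Under these assumptions, each coordinate is sent as a 4-bit code instead of a 32- or 64-bit float, which is the kind of communication reduction the abstract targets; the convergence behavior under such coarse quantization is what the paper verifies experimentally.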
Similar Papers
Jointly Computation- and Communication-Efficient Distributed Learning
Machine Learning (CS)
Makes computers learn together faster and with less data.
Learning to accelerate distributed ADMM using graph neural networks
Machine Learning (CS)
Learns faster ways for computers to solve big problems.
Asynchronous Distributed Multi-Robot Motion Planning Under Imperfect Communication
Robotics
Helps robots work together even with bad signals.