Modular Distributed Nonconvex Learning with Error Feedback
By: Guido Carnevale, Nicola Bastianello
Potential Business Impact:
Lets networked computers learn together while sending far less data, without sacrificing accuracy.
In this paper, we design a novel distributed learning algorithm using stochastic compressed communications. In detail, we pursue a modular approach, merging ADMM and a gradient-based method, benefiting from the robustness of the former and the computational efficiency of the latter. Additionally, we integrate a stochastic integral action (error feedback) enabling almost sure rejection of the compression error. We analyze the resulting method in nonconvex scenarios and guarantee almost sure asymptotic convergence to the set of stationary points of the problem. This result is obtained using system-theoretic tools based on stochastic timescale separation. We corroborate our findings with numerical simulations in nonconvex classification.
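To give a feel for the error-feedback idea mentioned in the abstract, below is a minimal sketch of generic error feedback with compressed communication in a toy distributed-gradient setting. This is not the paper's algorithm (which combines ADMM with a gradient-based method and a stochastic integral action): the top-k compressor, the server-style averaging, the quadratic losses, and the step size are all illustrative assumptions made only to show how accumulated compression errors are fed back into later messages.

```python
# Minimal sketch of generic error feedback with compressed communication.
# NOT the paper's method; it only illustrates the error-feedback mechanism:
# each node keeps a memory of past compression errors and adds it to the next
# message, so the compression error is progressively rejected.

import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest (a common compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def distributed_gradient_with_error_feedback(grads, x0, steps=200, lr=0.1, k=2):
    """Toy averaging of compressed local gradients with error feedback.

    grads: list of callables, one per node, returning the local gradient at x.
    """
    n = len(grads)
    x = x0.copy()
    e = [np.zeros_like(x0) for _ in range(n)]   # per-node compression-error memory
    for _ in range(steps):
        msgs = []
        for i in range(n):
            g = grads[i](x)
            corrected = g + e[i]        # add accumulated error (integral action)
            c = top_k(corrected, k)     # compress the corrected message
            e[i] = corrected - c        # store the new compression error
            msgs.append(c)
        x = x - lr * np.mean(msgs, axis=0)  # aggregate and take a gradient step
    return x

# Usage example (hypothetical data): each node holds a quadratic loss, so the
# minimizer of the sum is the mean of the targets.
targets = [np.array([1.0, -2.0, 0.5, 3.0]), np.array([0.0, 1.0, -1.0, 2.0])]
grads = [lambda x, t=t: x - t for t in targets]
x_star = distributed_gradient_with_error_feedback(grads, np.zeros(4))
print(x_star)  # approaches np.mean(targets, axis=0) despite the lossy compression
```

Without the error-feedback memory `e[i]`, the same top-k compressor would persistently discard small gradient coordinates and the iterates could stall away from a stationary point; feeding the error back is what allows the compression error to be rejected over time.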
Similar Papers
Accelerated Distributed Optimization with Compression and Error Feedback
Optimization and Control
Speeds up computer learning with less data sent.
Jointly Computation- and Communication-Efficient Distributed Learning
Machine Learning (CS)
Makes computers learn together faster and with less data.
Efficient Distributed Learning over Decentralized Networks with Convoluted Support Vector Machine
Machine Learning (Stat)
Teaches computers to learn from data faster.