Modular Distributed Nonconvex Learning with Error Feedback

Published: March 18, 2025 | arXiv ID: 2503.14055v2

By: Guido Carnevale, Nicola Bastianello

Potential Business Impact:

Enables networks of devices to jointly train machine-learning models while exchanging far less data, cutting communication costs without sacrificing convergence.

Business Areas:
Machine Learning, Software

In this paper, we design a novel distributed learning algorithm using stochastic compressed communications. In detail, we pursue a modular approach, merging ADMM with a gradient-based method, benefiting from the robustness of the former and the computational efficiency of the latter. Additionally, we integrate a stochastic integral action (error feedback) enabling almost sure rejection of the compression error. We analyze the resulting method in nonconvex scenarios and guarantee almost sure asymptotic convergence to the set of stationary points of the problem. This result is obtained using system-theoretic tools based on stochastic timescale separation. We corroborate our findings with numerical simulations in nonconvex classification.
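The error-feedback idea mentioned in the abstract can be illustrated with a minimal sketch: each node compresses the message it sends, but keeps an accumulator of the compression residual and adds it back before the next compression, so no information is permanently lost. The top-k compressor and the function names below are illustrative assumptions, not the paper's specific stochastic compressor or algorithm.

```python
import numpy as np

def topk_compress(v, k):
    """Illustrative compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]  # indices of the k largest |v_i|
    out[idx] = v[idx]
    return out

def error_feedback_step(grad, error, k):
    """One error-feedback round (hypothetical sketch, not the paper's method):
    compress the gradient plus the accumulated residual, then store the new
    residual so it is re-injected at the next round."""
    corrected = grad + error          # integral action: add back past compression error
    msg = topk_compress(corrected, k) # what actually gets transmitted
    new_error = corrected - msg       # residual carried to the next round
    return msg, new_error

# By construction, msg + new_error == grad + error at every step, so the sum of
# transmitted messages plus the final residual equals the sum of true gradients:
rng = np.random.default_rng(0)
d, k = 10, 3
error = np.zeros(d)
sent_total = np.zeros(d)
grad_total = np.zeros(d)
for _ in range(200):
    g = rng.normal(size=d)
    grad_total += g
    msg, error = error_feedback_step(g, error, k)
    sent_total += msg
print(np.allclose(sent_total + error, grad_total))  # the telescoping invariant holds
```

The loop demonstrates why error feedback "rejects" the compression error: the per-step residuals telescope, so the transmitted information never drifts away from the true gradient sum, even though each individual message is heavily compressed.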

Country of Origin
🇮🇹 🇸🇪 Italy, Sweden

Page Count
7 pages

Category
Mathematics:
Optimization and Control