Distributed Optimization and Learning for Automated Stepsize Selection with Finite Time Coordination
By: Apostolos I. Rikos, Nicola Bastianello, Themistoklis Charalambous, and more
Potential Business Impact:
Makes computers learn faster and more accurately.
Distributed optimization and learning algorithms are designed to operate over large-scale networks, enabling the effective and efficient processing of vast amounts of data. One of the main challenges for ensuring a smooth learning process in gradient-based methods is the appropriate selection of a learning stepsize. Most current distributed approaches let individual nodes adapt their stepsizes locally. However, this may introduce stepsize heterogeneity in the network, disrupting the learning process and potentially leading to divergence. In this paper, we propose a distributed learning algorithm that incorporates a novel mechanism for automating stepsize selection among nodes. Our main idea relies on implementing a finite time coordination algorithm for eliminating stepsize heterogeneity among nodes. We analyze the operation of our algorithm and establish its convergence to the optimal solution. We conclude our paper with numerical simulations for a linear regression problem, showing that eliminating stepsize heterogeneity improves convergence speed and accuracy compared with current approaches.
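A minimal sketch of the idea outlined in the abstract: each node proposes a local stepsize, a finite-time coordination round removes the heterogeneity so all nodes agree on a single stepsize, and the nodes then run distributed gradient descent on a shared linear regression problem. The ring topology, the min-consensus coordination rule, and all hyperparameters below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_features, samples_per_node = 5, 3, 20
x_true = rng.normal(size=n_features)

# Local data: node i holds (A_i, b_i) with b_i = A_i x_true + noise.
A = [rng.normal(size=(samples_per_node, n_features)) for _ in range(n_nodes)]
b = [A_i @ x_true + 0.01 * rng.normal(size=samples_per_node) for A_i in A]

# Undirected ring graph: neighbors of node i (assumed topology).
neighbors = [[(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)]

# Each node proposes a stepsize, e.g. 1 / L_i with L_i a local smoothness estimate.
alpha = [1.0 / np.linalg.norm(A_i.T @ A_i, 2) for A_i in A]

# Finite-time coordination: min-consensus terminates after at most diameter(G)
# rounds on a connected graph, after which every node holds the same stepsize.
diameter = n_nodes // 2  # diameter of a ring with n_nodes vertices
for _ in range(diameter):
    alpha = [min([alpha[i]] + [alpha[j] for j in neighbors[i]])
             for i in range(n_nodes)]
assert len(set(alpha)) == 1  # stepsize heterogeneity eliminated

# Distributed gradient descent with neighborhood averaging and the common stepsize.
x = [np.zeros(n_features) for _ in range(n_nodes)]
for _ in range(200):
    grads = [A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)]
    x = [np.mean([x[i]] + [x[j] for j in neighbors[i]], axis=0) - alpha[i] * grads[i]
         for i in range(n_nodes)]

print("max node error:", max(np.linalg.norm(x_i - x_true) for x_i in x))
```

After the coordination rounds every node uses the same (smallest proposed) stepsize, so the subsequent gradient updates are homogeneous across the network; with heterogeneous stepsizes the same loop can drift or diverge, which is the failure mode the paper targets.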
Similar Papers
Fully Adaptive Stepsizes: Which System Benefit More -- Centralized or Decentralized?
Optimization and Control
Helps computers learn faster by adjusting their own learning speed.
Adaptive control mechanisms in gradient descent algorithms
Optimization and Control
Makes computer learning faster and more accurate.
Delay-Tolerant Augmented-Consensus-based Distributed Directed Optimization
Systems and Control
Fixes slow computer networks for faster learning.