Distributed Optimization and Learning for Automated Stepsize Selection with Finite Time Coordination

Published: August 7, 2025 | arXiv ID: 2508.05887v1

By: Apostolos I. Rikos, Nicola Bastianello, Themistoklis Charalambous, and more

Potential Business Impact:

Speeds up and stabilizes distributed machine learning by automatically coordinating the learning stepsize across networked machines, improving convergence speed and accuracy.

Distributed optimization and learning algorithms are designed to operate over large-scale networks, enabling the processing of vast amounts of data effectively and efficiently. One of the main challenges for ensuring a smooth learning process in gradient-based methods is the appropriate selection of a learning stepsize. Most current distributed approaches let individual nodes adapt their stepsizes locally. However, this may introduce stepsize heterogeneity in the network, disrupting the learning process and potentially leading to divergence. In this paper, we propose a distributed learning algorithm that incorporates a novel mechanism for automating stepsize selection among nodes. Our main idea relies on implementing a finite-time coordination algorithm to eliminate stepsize heterogeneity among the nodes. We analyze the operation of our algorithm and establish its convergence to the optimal solution. We conclude with numerical simulations for a linear regression problem, showing that eliminating stepsize heterogeneity improves convergence speed and accuracy over current approaches.
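To make the idea concrete, here is a minimal Python sketch of the setting the abstract describes, on a toy problem: each node first derives a stepsize from its own data, a finite-time min-consensus pass (used here as a plausible stand-in for the paper's finite-time coordination mechanism, which the paper itself specifies) hands every node the same stepsize after a number of rounds equal to the network diameter, and the nodes then run consensus-based gradient descent on a shared linear regression problem. The ring topology, the 1/L_i stepsize rule, and the consensus-plus-gradient update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy setup (illustrative, not from the paper): a ring of n nodes, each
# holding a shard of the linear regression problem
#   min_x  sum_i (1/2) * ||A_i x - b_i||^2.
rng = np.random.default_rng(0)
n_nodes, dim, m_local = 5, 3, 20
A = [rng.normal(size=(m_local, dim)) for _ in range(n_nodes)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true + 0.01 * rng.normal(size=m_local) for Ai in A]

# Ring topology; a 5-node ring has diameter 2.
neighbors = [[(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)]
diameter = n_nodes // 2

# Each node picks a local stepsize 1/L_i from its local smoothness
# constant L_i = lambda_max(A_i^T A_i); these generally differ across nodes.
local_step = [1.0 / np.linalg.eigvalsh(Ai.T @ Ai)[-1] for Ai in A]

# Finite-time coordination via min-consensus: after `diameter` rounds of
# taking the min over each node's closed neighborhood, every node holds
# the smallest (safest) stepsize in the network, so heterogeneity is gone.
step = list(local_step)
for _ in range(diameter):
    step = [min([step[i]] + [step[j] for j in neighbors[i]])
            for i in range(n_nodes)]
alpha = step[0]  # now identical at every node

# Plain consensus + gradient descent with the common stepsize (a simple
# stand-in for the paper's learning algorithm).
x = [np.zeros(dim) for _ in range(n_nodes)]
for _ in range(500):
    x_avg = [(x[i] + x[neighbors[i][0]] + x[neighbors[i][1]]) / 3.0
             for i in range(n_nodes)]
    x = [x_avg[i] - alpha * A[i].T @ (A[i] @ x_avg[i] - b[i])
         for i in range(n_nodes)]

print("common stepsize after coordination:", alpha)
print("node 0 distance to x_true:", np.linalg.norm(x[0] - x_true))
```

Min-consensus is a natural fit for this kind of coordination because it terminates exactly after diameter-many rounds, so every node provably holds the same (most conservative) stepsize before learning begins.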

Country of Origin
🇨🇾 Cyprus

Page Count
8 pages

Category
Electrical Engineering and Systems Science: Systems and Control