Learning to accelerate distributed ADMM using graph neural networks
By: Henri Doerks, Paul Häusner, Daniel Hernández Escobar, and more
Potential Business Impact:
Learns faster ways for computers to solve big problems.
Distributed optimization is fundamental in large-scale machine learning and control applications. Among existing methods, the Alternating Direction Method of Multipliers (ADMM) has gained popularity due to its strong convergence guarantees and suitability for decentralized computation. However, ADMM often suffers from slow convergence and sensitivity to hyperparameter choices. In this work, we show that distributed ADMM iterations can be naturally represented within the message-passing framework of graph neural networks (GNNs). Building on this connection, we propose to learn adaptive step sizes and communication weights with a graph neural network that predicts these hyperparameters from the current iterates. By unrolling ADMM for a fixed number of iterations, we train the network parameters end-to-end to minimize the error of the final iterate for a given problem class, while preserving the algorithm's convergence properties. Numerical experiments demonstrate that our learned variant consistently improves convergence speed and solution quality compared to standard ADMM. The code is available at https://github.com/paulhausner/learning-distributed-admm.
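To make the unrolling idea concrete, here is a minimal sketch of consensus ADMM for distributed least squares with a learnable per-iteration step size trained end-to-end on the final-iterate error. A simple vector of learned step sizes stands in for the paper's GNN hyperparameter predictor, and the problem sizes, data, and training loop are illustrative assumptions rather than details taken from the paper or its repository.

```python
import torch

# Sketch: unrolled consensus ADMM for min_x sum_i 0.5 * ||A_i x - b_i||^2,
# with a learnable per-iteration step size rho_k (a stand-in for a GNN that
# predicts hyperparameters from the iterates). All sizes are illustrative.
torch.manual_seed(0)
n_agents, n_feat, n_obs, K = 4, 5, 10, 15

# Synthetic local data A_i, b_i sharing one ground-truth solution.
x_true = torch.randn(n_feat)
A = [torch.randn(n_obs, n_feat) for _ in range(n_agents)]
b = [Ai @ x_true + 0.01 * torch.randn(n_obs) for Ai in A]

# Learnable log step sizes, one per unrolled ADMM iteration.
log_rho = torch.nn.Parameter(torch.zeros(K))
opt = torch.optim.Adam([log_rho], lr=0.05)

def unrolled_admm(log_rho):
    rho = torch.exp(log_rho)                             # keep step sizes positive
    z = torch.zeros(n_feat)                              # global consensus variable
    u = [torch.zeros(n_feat) for _ in range(n_agents)]   # scaled dual variables
    for k in range(K):
        # Local x-updates: (A_i^T A_i + rho I)^{-1} (A_i^T b_i + rho (z - u_i))
        x = []
        for i in range(n_agents):
            H = A[i].T @ A[i] + rho[k] * torch.eye(n_feat)
            rhs = A[i].T @ b[i] + rho[k] * (z - u[i])
            x.append(torch.linalg.solve(H, rhs))
        # Consensus z-update: average of local estimates plus duals.
        z = torch.stack([x[i] + u[i] for i in range(n_agents)]).mean(0)
        # Dual ascent step.
        u = [u[i] + x[i] - z for i in range(n_agents)]
    return z

# Train the step sizes end-to-end by differentiating through the unrolled loop.
for step in range(200):
    opt.zero_grad()
    loss = torch.norm(unrolled_admm(log_rho) - x_true) ** 2  # final-iterate error
    loss.backward()
    opt.step()

print("final-iterate error:", loss.item())
```

In this toy setup the loss is measured against the known ground-truth solution of the synthetic problem; the paper instead trains over a class of problem instances, and its GNN additionally predicts communication weights, which this sketch omits.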
Similar Papers
Communication-Efficient Distributed Asynchronous ADMM
Machine Learning (CS)
Shrinks data to speed up computer learning.
Jointly Computation- and Communication-Efficient Distributed Learning
Machine Learning (CS)
Makes computers learn together faster and with less data.
ADMM-Based Training for Spiking Neural Networks
Machine Learning (CS)
Teaches brain-like computers to learn faster.