Accelerated Decentralized Constraint-Coupled Optimization: A Dual$^2$ Approach
By: Jingwang Li, Vincent Lau
Potential Business Impact:
Helps many computers work together faster.
In this paper, we focus on a class of decentralized constraint-coupled optimization problems: $\min_{x_i \in \mathbb{R}^{d_i}, i \in \mathcal{I}; \, y \in \mathbb{R}^p} \sum_{i=1}^n\left(f_i(x_i) + g_i(x_i)\right) + h(y) \ \text{s.t.} \ \sum_{i=1}^{n}A_ix_i = y$, over an undirected and connected network of $n$ agents. Here, $f_i$, $g_i$, and $A_i$ are private information of agent $i \in \mathcal{I} = \{1, \cdots, n\}$, while $h$ is known to all agents. Building on a novel dual$^2$ approach, we develop two accelerated algorithms to solve this problem: the inexact Dual$^2$ Accelerated (iD2A) gradient method and the Multi-consensus inexact Dual$^2$ Accelerated (MiD2A) gradient method. We show that both iD2A and MiD2A guarantee asymptotic convergence under a milder condition on $h$ than existing algorithms require. Furthermore, under additional assumptions, we establish linear convergence rates and derive significantly lower communication and computational complexity bounds than those of existing algorithms. Several numerical experiments validate our theoretical analysis and demonstrate the practical superiority of the proposed algorithms.
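To make the coupling structure concrete, below is a minimal NumPy sketch of a toy instance of this problem solved with classical (non-accelerated) dual ascent on the coupling constraint $\sum_i A_i x_i = y$. This is only a baseline for intuition, not the paper's iD2A/MiD2A methods; the problem data ($f_i(x_i) = \frac{1}{2}\|x_i - c_i\|^2$, $g_i = 0$, $h(y) = \frac{\rho}{2}\|y\|^2$) and all parameters (`c`, `rho`, the step size `alpha`) are illustrative assumptions.

```python
import numpy as np

# Toy instance of the constraint-coupled problem
#   min_{x_i, y}  sum_i f_i(x_i) + h(y)   s.t.  sum_i A_i x_i = y
# with f_i(x_i) = 0.5*||x_i - c_i||^2, g_i = 0, h(y) = 0.5*rho*||y||^2.
# (c_i, rho, and alpha are hypothetical toy choices, not from the paper.)

rng = np.random.default_rng(0)
n, p = 5, 3                                # agents, coupling dimension
d = [4, 2, 3, 5, 4]                        # local dimensions d_i
A = [rng.standard_normal((p, di)) for di in d]   # private coupling matrices A_i
c = [rng.standard_normal(di) for di in d]        # private targets c_i
rho = 2.0

# Dual ascent on the multiplier lam of the constraint sum_i A_i x_i = y.
# For this quadratic instance, the Lagrangian minimizers are closed-form:
#   x_i(lam) = argmin f_i(x_i) + lam^T A_i x_i = c_i - A_i^T lam
#   y(lam)   = argmin h(y) - lam^T y           = lam / rho
L = np.linalg.norm(sum(Ai @ Ai.T for Ai in A) + np.eye(p) / rho, 2)
alpha = 1.0 / L                            # safe step for the L-smooth dual
lam = np.zeros(p)
for _ in range(500):
    x = [ci - Ai.T @ lam for Ai, ci in zip(A, c)]
    y = lam / rho
    residual = sum(Ai @ xi for Ai, xi in zip(A, x)) - y
    lam += alpha * residual                # ascend the concave dual function

print("constraint violation:", np.linalg.norm(residual))  # -> approx. 0
```

In a decentralized setting, the residual $\sum_i A_i x_i - y$ is not available at any single agent, which is what motivates consensus-based schemes such as the multi-consensus rounds in MiD2A; the sketch above sidesteps this by computing it centrally.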
Similar Papers
Dual Acceleration for Minimax Optimization: Linear Convergence Under Relaxed Assumptions
Optimization and Control
Solves hard math problems faster and more reliably.
Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Optimization and Control
Solves big problems by breaking them into smaller ones.
Unrolled Graph Neural Networks for Constrained Optimization
Machine Learning (CS)
Teaches computers to solve hard problems better.