An Optimistic Gradient Tracking Method for Distributed Minimax Optimization
By: Yan Huang, Jinming Xu, Jiming Chen, et al.
Potential Business Impact:
Helps many computers solve a shared competitive optimization problem together, faster and with less communication.
This paper studies the distributed minimax optimization problem over networks. To enhance convergence performance, we propose a distributed optimistic gradient tracking method, termed DOGT, in which each agent solves a local surrogate problem that exploits the similarity between local objective functions, thereby approximating a centralized optimistic method. Leveraging a Lyapunov-based analysis, we prove that DOGT converges linearly to the optimal solution for strongly convex-strongly concave objective functions while remaining robust to heterogeneity among the local objectives. Moreover, by integrating an accelerated consensus protocol, the accelerated DOGT (ADOGT) algorithm achieves an optimal convergence rate of $\mathcal{O}\left( \kappa \log \left( \epsilon ^{-1} \right) \right)$ and communication complexity of $\mathcal{O}\left( \kappa \log \left( \epsilon ^{-1} \right) /\sqrt{1-\sqrt{\rho _W}} \right)$ for a suboptimality level $\epsilon>0$, where $\kappa$ is the condition number of the objective function and $\rho_W$ is the spectral gap of the network. Numerical experiments illustrate the effectiveness of the proposed algorithms.
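To make the high-level description concrete, below is a minimal sketch of an optimistic gradient step combined with gradient tracking over a network, in the spirit of DOGT but not the authors' exact updates: the toy quadratic objectives, the ring topology with Metropolis weights, the step size, and the omission of the similarity-based surrogate are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (NOT the paper's exact DOGT/ADOGT updates): optimistic
# gradient descent-ascent with gradient tracking on a toy problem where
# agent i holds f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2,
# strongly convex in x and strongly concave in y. All names and
# parameters below are illustrative assumptions.
n, T, eta = 5, 400, 0.05
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, n)    # local curvature in x
c = rng.uniform(1.0, 2.0, n)    # local curvature in y
b = rng.uniform(-0.5, 0.5, n)   # local x-y coupling

W = np.zeros((n, n))            # doubly stochastic Metropolis weights on a ring
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.25, 0.25

def grad(x, y):
    """Stacked per-agent gradients (df_i/dx, df_i/dy), each of shape (n,)."""
    return a * x + b * y, b * x - c * y

x, y = np.zeros(n), np.zeros(n)
gx, gy = grad(x, y)             # trackers of the network-average gradient
gx_prev, gy_prev = gx.copy(), gy.copy()

for _ in range(T):
    # Optimistic step: extrapolate with 2*g_t - g_{t-1}, mix with neighbors.
    x_new = W @ x - eta * (2 * gx - gx_prev)   # descent in x
    y_new = W @ y + eta * (2 * gy - gy_prev)   # ascent in y
    fx_old, fy_old = grad(x, y)
    fx_new, fy_new = grad(x_new, y_new)
    # Gradient tracking: g_{t+1} = W g_t + grad(z_{t+1}) - grad(z_t),
    # so each agent's tracker follows the network-average gradient.
    gx, gx_prev = W @ gx + fx_new - fx_old, gx
    gy, gy_prev = W @ gy + fy_new - fy_old, gy
    x, y = x_new, y_new

print("consensus residual:", np.max(np.abs(x - x.mean())))
print("distance to saddle:", np.hypot(x.mean(), y.mean()))  # saddle at (0, 0)
```

The extrapolated direction $2g_t - g_{t-1}$ is what makes the step "optimistic", while the tracking recursion $g_{t+1} = W g_t + \nabla f(z_{t+1}) - \nabla f(z_t)$ keeps each agent's search direction aligned with the network-average gradient even when the local objectives are heterogeneous.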
Similar Papers
Distributed Optimization with Gradient Tracking over Heterogeneous Delay-Prone Directed Networks
Systems and Control
Keeps distributed optimization working even when network links are slow or delayed.
Multi-cluster distributed optimization in open multi-agent systems over directed graphs with acknowledgement messages
Systems and Control
Helps robots work together even as they join or leave the network.
Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Optimization and Control
Solves big problems by breaking them into smaller ones.