Score: 1

An Optimistic Gradient Tracking Method for Distributed Minimax Optimization

Published: August 29, 2025 | arXiv ID: 2508.21431v1

By: Yan Huang, Jinming Xu, Jiming Chen, and more

Potential Business Impact:

Enables networks of machines to jointly solve min-max optimization problems (e.g., adversarial training or robust learning) faster and with fewer communication rounds.

Business Areas:
A/B Testing, Data and Analytics

This paper studies the distributed minimax optimization problem over networks. To enhance convergence performance, we propose a distributed optimistic gradient tracking method, termed DOGT, which solves a surrogate function that captures the similarity between local objective functions to approximate a centralized optimistic approach locally. Leveraging a Lyapunov-based analysis, we prove that DOGT achieves linear convergence to the optimal solution for strongly convex-strongly concave objective functions while remaining robust to the heterogeneity among them. Moreover, by integrating an accelerated consensus protocol, the accelerated DOGT (ADOGT) algorithm achieves an optimal convergence rate of $\mathcal{O} \left( \kappa \log \left( \epsilon ^{-1} \right) \right)$ and communication complexity of $\mathcal{O} \left( \kappa \log \left( \epsilon ^{-1} \right) /\sqrt{1-\sqrt{\rho _W}} \right)$ for a suboptimality level of $\epsilon>0$, where $\kappa$ is the condition number of the objective function and $\rho_W$ is the spectral gap of the network. Numerical experiments illustrate the effectiveness of the proposed algorithms.
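For intuition, here is a minimal sketch of the core idea, not the authors' exact DOGT method: each agent applies an optimistic (extrapolated) step on its local saddle-point operator, mixes iterates with its neighbors through a doubly stochastic matrix W, and uses gradient tracking so local directions approximate the global gradient. The quadratic objectives, ring topology, and step size below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, eta = 8, 400, 0.05  # agents, iterations, step size (assumed)

# Illustrative local objectives (assumed, not from the paper):
#   f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2 + p_i*x - q_i*y
# a_i, c_i > 0 makes each f_i strongly convex in x, strongly concave in y.
a = rng.uniform(1.0, 2.0, n)
c = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-0.5, 0.5, n)
p = rng.normal(size=n)
q = rng.normal(size=n)

def G(i, z):
    """Local saddle operator of agent i: (grad_x f_i, -grad_y f_i)."""
    x, y = z
    return np.array([a[i] * x + b[i] * y + p[i],
                     -b[i] * x + c[i] * y + q[i]])

# Saddle point of the global objective sum_i f_i, used only to measure error.
M = np.array([[a.sum(), b.sum()], [-b.sum(), c.sum()]])
z_star = np.linalg.solve(M, -np.array([p.sum(), q.sum()]))

# Doubly stochastic mixing matrix for a ring network (lazy uniform weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

# Optimistic gradient tracking iteration (sketch):
#   z_i <- sum_j W_ij z_j - eta * (2 s_i - s_i_prev)   (consensus + optimism)
#   s_i <- sum_j W_ij s_j + G_i(z_i_new) - G_i(z_i)    (gradient tracking)
z = np.zeros((n, 2))                          # local (x_i, y_i) iterates
g = np.array([G(i, z[i]) for i in range(n)])  # current local operator values
s = g.copy()                                  # tracker of the average operator
s_prev = s.copy()

for _ in range(T):
    z = W @ z - eta * (2 * s - s_prev)
    g_new = np.array([G(i, z[i]) for i in range(n)])
    s_prev, s = s, W @ s + g_new - g
    g = g_new

print("max distance to saddle point:",
      np.linalg.norm(z - z_star, axis=1).max())
```

In this sketch $\rho_W$ corresponds to the second-largest eigenvalue magnitude of W; the ADOGT variant would, roughly speaking, replace the single mixing step per iteration with an accelerated (Chebyshev-type) multi-round consensus to obtain the $1/\sqrt{1-\sqrt{\rho_W}}$ communication factor quoted in the abstract.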

Country of Origin
🇨🇳 🇸🇪 China, Sweden

Page Count
8 pages

Category
Mathematics: Optimization and Control