A Fundamental Convergence Rate Bound for Gradient Based Online Optimization Algorithms with Exact Tracking

Published: August 29, 2025 | arXiv ID: 2508.21335v2

By: Alex Xinting Wu, Ian R. Petersen, Iman Shames

Potential Business Impact:

Helps computers find the best answer faster, even when that best answer keeps changing over time.

Business Areas:
A/B Testing, Data and Analytics

In this paper, we consider algorithms with integral action for solving online optimization problems characterized by quadratic cost functions whose time-varying optimal point is described by an $(n-1)$th-order polynomial. Using a version of the internal model principle, the optimization algorithms under consideration are required to incorporate a discrete-time $n$th-order integrator in order to achieve exact tracking. By using results on an optimal gain margin problem, we obtain a fundamental convergence rate bound for the class of linear gradient-based algorithms exactly tracking a time-varying optimal point. This convergence rate bound is given by $\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{\frac{1}{n}}$, where $\kappa$ is the condition number for the set of cost functions under consideration. Using our approach, we also construct algorithms which achieve the optimal convergence rate as well as zero steady-state error when tracking a time-varying optimal point.
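To make the headline bound concrete, here is a minimal Python sketch (not from the paper) that evaluates $\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{\frac{1}{n}}$ for a few condition numbers $\kappa$ and integrator orders $n$. The function name `convergence_rate_bound` and the example values are illustrative assumptions, not part of the paper.

```python
import math


def convergence_rate_bound(kappa: float, n: int) -> float:
    """Fundamental convergence rate bound from the abstract:

        rho = ((sqrt(kappa) - 1) / (sqrt(kappa) + 1)) ** (1 / n)

    where kappa >= 1 is the condition number of the quadratic cost
    functions and n is the order of the integrator required for exact
    tracking (the optimal point moves as an (n-1)th-order polynomial).
    """
    if kappa < 1 or n < 1:
        raise ValueError("require kappa >= 1 and integer n >= 1")
    root = math.sqrt(kappa)
    return ((root - 1) / (root + 1)) ** (1.0 / n)


if __name__ == "__main__":
    # Illustrative values only: a smaller rho means faster worst-case
    # convergence, and rho = 1 would mean no contraction at all.
    for kappa in (10.0, 100.0):
        for n in (1, 2, 3):
            rho = convergence_rate_bound(kappa, n)
            print(f"kappa={kappa:6.1f}  n={n}  bound rho={rho:.4f}")
```

As the formula suggests, exact tracking of a higher-order (faster-moving) polynomial optimal point pushes the achievable rate bound closer to 1, i.e. worst-case convergence necessarily slows as $n$ grows.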

Country of Origin
🇦🇺 Australia

Page Count
13 pages

Category
Mathematics:
Optimization and Control