A Fundamental Convergence Rate Bound for Gradient Based Online Optimization Algorithms with Exact Tracking
By: Alex Xinting Wu, Ian R. Petersen, Iman Shames
Potential Business Impact:
Helps computers quickly track the best answer even as that answer keeps changing over time.
In this paper, we consider algorithms with integral action for solving online optimization problems characterized by quadratic cost functions whose time-varying optimal point is described by an $(n-1)$th-order polynomial. Using a version of the internal model principle, the optimization algorithms under consideration are required to incorporate a discrete-time $n$th-order integrator in order to achieve exact tracking. Using results on an optimal gain margin problem, we obtain a fundamental convergence rate bound for the class of linear gradient-based algorithms that exactly track a time-varying optimal point. This convergence rate bound is given by $\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{\frac{1}{n}}$, where $\kappa$ is the condition number for the set of cost functions under consideration. Using our approach, we also construct algorithms which achieve the optimal convergence rate as well as zero steady-state error when tracking a time-varying optimal point.
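The bound stated in the abstract is straightforward to evaluate numerically. The sketch below (the function name and the example condition number are our own illustrative choices, not from the paper) simply computes $\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{\frac{1}{n}}$ and shows how the achievable rate moves toward 1 (i.e., slower guaranteed convergence) as the order $n$ of the polynomial trajectory of the optimal point increases.

```python
import math

def convergence_rate_bound(kappa: float, n: int) -> float:
    """Evaluate the fundamental convergence rate bound
    ((sqrt(kappa) - 1) / (sqrt(kappa) + 1)) ** (1 / n)
    for linear gradient-based algorithms exactly tracking an optimal
    point that moves as an (n-1)th-order polynomial in time.
    kappa is the condition number of the set of cost functions."""
    if kappa < 1 or n < 1:
        raise ValueError("require kappa >= 1 and n >= 1")
    return ((math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)) ** (1.0 / n)

# Illustrative example (kappa = 100 is an assumed value):
# n = 1 corresponds to a static optimum, n = 2 to a linearly drifting one.
for n in (1, 2, 3):
    print(f"n = {n}: rate bound = {convergence_rate_bound(kappa=100.0, n=n):.4f}")
```

For $\kappa = 100$ this gives roughly 0.818, 0.905, and 0.935 for $n = 1, 2, 3$, illustrating that requiring exact tracking of higher-order polynomial trajectories pushes the best achievable linear convergence rate closer to 1.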
Similar Papers
Temporal Variabilities Limit Convergence Rates in Gradient-Based Online Optimization
Optimization and Control
Makes computers learn faster when things change.
On the Rate of Gaussian Approximation for Linear Regression Problems
Machine Learning (Stat)
Helps computers guess better with more data.