Temporal Variabilities Limit Convergence Rates in Gradient-Based Online Optimization
By: Bryan Van Scoy, Gianluca Bianchin
Potential Business Impact:
Shows the limits on how fast computers can adapt when the problem keeps changing.
This paper investigates the fundamental performance limits of gradient-based algorithms for time-varying optimization. Leveraging the internal model principle and root locus techniques, we show that temporal variabilities impose intrinsic limits on the achievable rate of convergence. For a problem with condition ratio $\kappa$ and a model of time variation of degree $n$, we show that the worst-case convergence rate of any minimal-order gradient-based algorithm is $\rho_\text{TV} = \left(\frac{\kappa-1}{\kappa+1}\right)^{1/n}$. This bound reveals a fundamental tradeoff between problem conditioning, temporal complexity, and rate of convergence. We further construct explicit controllers that attain the bound for low-degree models of time variation.
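As an illustrative sketch (not code from the paper), the snippet below simply evaluates the stated bound $\rho_\text{TV} = (\frac{\kappa-1}{\kappa+1})^{1/n}$ for a few condition ratios and model degrees; the function name `rho_tv` and the sample values are assumptions made here for illustration.

```python
# Illustrative sketch: evaluates the worst-case convergence rate bound
# rho_TV = ((kappa - 1) / (kappa + 1)) ** (1 / n) stated in the abstract,
# for a few condition ratios kappa and time-variation model degrees n.
# (Function name and sample values are assumptions, not from the paper.)

def rho_tv(kappa: float, n: int) -> float:
    """Worst-case convergence rate bound for condition ratio kappa and a
    time-variation model of degree n, per the formula in the abstract."""
    if kappa <= 1 or n < 1:
        raise ValueError("requires kappa > 1 and n >= 1")
    return ((kappa - 1) / (kappa + 1)) ** (1 / n)

if __name__ == "__main__":
    for kappa in (2.0, 10.0, 100.0):
        for n in (1, 2, 3):
            print(f"kappa={kappa:6.1f}, n={n}: rho_TV = {rho_tv(kappa, n):.4f}")
```

Since the base $(\kappa-1)/(\kappa+1)$ lies in $(0,1)$, raising it to the power $1/n$ pushes the bound toward 1 as $n$ grows, reflecting the tradeoff the abstract describes: richer models of time variation force slower worst-case convergence.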
Similar Papers
A Fundamental Convergence Rate Bound for Gradient Based Online Optimization Algorithms with Exact Tracking
Optimization and Control
Helps computers find the best answer faster.
Time-Varying Optimization for Streaming Data Via Temporal Weighting
Machine Learning (CS)
Learns from changing information to make better choices.