Score: 1

Temporal Variabilities Limit Convergence Rates in Gradient-Based Online Optimization

Published: October 14, 2025 | arXiv ID: 2510.12512v1

By: Bryan Van Scoy, Gianluca Bianchin

Potential Business Impact:

Could help online optimization and learning systems adapt more quickly to changing conditions, and clarifies the fundamental limits on how fast they can adapt.

Business Areas:
A/B Testing, Data and Analytics

This paper investigates the fundamental performance limits of gradient-based algorithms for time-varying optimization. Leveraging the internal model principle and root locus techniques, we show that temporal variabilities impose intrinsic limits on the achievable rate of convergence. For a problem with condition ratio $\kappa$ and time variation whose model has degree $n$, we show that the worst-case convergence rate of any minimal-order gradient-based algorithm is $\rho_\text{TV} = (\frac{\kappa-1}{\kappa+1})^{1/n}$. This bound reveals a fundamental tradeoff between problem conditioning, temporal complexity, and rate of convergence. We further construct explicit controllers that attain the bound for low-degree models of time variation.
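To make the stated tradeoff concrete, the minimal Python sketch below evaluates the bound $\rho_\text{TV} = (\frac{\kappa-1}{\kappa+1})^{1/n}$ from the abstract for a few illustrative values of the condition ratio $\kappa$ and model degree $n$; the specific values are hypothetical and chosen only to show how both poorer conditioning and higher-degree time variation push the worst-case rate toward 1.

```python
# Sketch: evaluating the worst-case rate bound rho_TV = ((kappa-1)/(kappa+1))**(1/n)
# from the abstract. The example values of kappa and n are illustrative only,
# not taken from the paper.

def rho_tv(kappa: float, n: int) -> float:
    """Worst-case convergence rate for condition ratio kappa and degree-n time variation."""
    return ((kappa - 1) / (kappa + 1)) ** (1.0 / n)

for kappa in (10, 100):
    for n in (1, 2, 4):
        print(f"kappa={kappa:>4}, n={n}: rho_TV = {rho_tv(kappa, n):.4f}")

# Larger kappa (worse conditioning) and larger n (richer time variation)
# both drive rho_TV toward 1, i.e., slower worst-case convergence; n = 1
# recovers the familiar (kappa-1)/(kappa+1) rate for static problems.
```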

Country of Origin
🇧🇪 🇺🇸 Belgium, United States

Page Count
7 pages

Category
Mathematics:
Optimization and Control