Score: 1

Learning to optimize with guarantees: a complete characterization of linearly convergent algorithms

Published: August 1, 2025 | arXiv ID: 2508.00775v1

By: Andrea Martin, Ian R. Manchester, Luca Furieri

Potential Business Impact:

Speeds up optimization solvers on the problem instances that occur in practice without breaking their worst-case guarantees.

In high-stakes engineering applications, optimization algorithms must come with provable worst-case guarantees over a mathematically defined class of problems. Designing for the worst case, however, inevitably sacrifices performance on the specific problem instances that often occur in practice. We address the problem of augmenting a given linearly convergent algorithm to improve its average-case performance on a restricted set of target problems - for example, tailoring an off-the-shelf solver for model predictive control (MPC) to a specific dynamical system - while preserving its worst-case guarantees across the entire problem class. Toward this goal, we characterize the class of algorithms that achieve linear convergence for classes of nonsmooth composite optimization problems. In particular, starting from a baseline linearly convergent algorithm, we derive all - and only - the modifications to its update rule that maintain its convergence properties. Our results apply to augmenting legacy algorithms such as gradient descent for nonconvex, gradient-dominated functions; Nesterov's accelerated method for strongly convex functions; and projected methods for optimization over polyhedral feasibility sets. We showcase the effectiveness of the approach on solving optimization problems under tight iteration budgets, with applications to ill-conditioned systems of linear equations and MPC for linear systems.
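To make the augmentation idea concrete, below is a minimal, hypothetical sketch, not the paper's construction or its complete characterization: it adds a heavy-ball-like correction term to plain gradient descent on a strongly convex quadratic and clips that correction so the baseline linear contraction factor provably survives. The quadratic objective, the correction rule, the threshold rho, and all function names are illustrative assumptions introduced here for the example.

```python
import numpy as np

# Illustrative sketch (assumed setup, not from the paper): augment gradient
# descent with a correction v_k and safeguard it so the worst-case linear
# rate of the baseline method is preserved.

def make_quadratic(n=20, cond=100.0, seed=0):
    """Random f(x) = 0.5 x'Qx with smallest eigenvalue mu = 1, largest L = cond."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    eigs = np.linspace(1.0, cond, n)
    return U @ np.diag(eigs) @ U.T, eigs.min(), eigs.max()

def augmented_gd(Q, mu, L, x0, rho, iters=200):
    """x+ = x - (1/L)*Qx + v, with the correction clipped to ||v|| <= rho*||Qx||.

    Plain gradient descent with step 1/L contracts ||x - x*|| by q = 1 - mu/L
    per step (here x* = 0). Since ||Qx|| <= L*||x||, any correction with
    rho < (1 - q)/L = mu/L^2 keeps the contraction factor q + rho*L < 1,
    so the baseline worst-case guarantee survives the augmentation.
    """
    q = 1.0 - mu / L
    assert rho < mu / L**2, "rho too large: guarantee no longer preserved"
    x, x_prev = x0.copy(), x0.copy()
    errs = [np.linalg.norm(x)]
    for _ in range(iters):
        g = Q @ x
        v = 0.5 * (x - x_prev)            # stand-in "learned" correction term
        cap = rho * np.linalg.norm(g)     # safeguard: clip the correction
        if np.linalg.norm(v) > cap:
            v *= cap / (np.linalg.norm(v) + 1e-12)
        x_prev, x = x, x - g / L + v
        errs.append(np.linalg.norm(x))
    return np.array(errs), q + rho * L

if __name__ == "__main__":
    Q, mu, L = make_quadratic()
    x0 = np.ones(Q.shape[0])
    errs, rate = augmented_gd(Q, mu, L, x0, rho=0.5 * mu / L**2)
    bound = np.linalg.norm(x0) * rate ** np.arange(len(errs))
    print("certified contraction factor:", rate)
    print("worst-case bound respected:", bool(np.all(errs <= bound + 1e-9)))
```

The clipping rule above is only one simple sufficient safeguard; the paper's contribution is a complete characterization of all, and only, the update-rule modifications that preserve linear convergence, covering nonsmooth composite problems and the other settings listed in the abstract.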

Country of Origin
🇬🇧 🇸🇪 🇦🇺 Sweden, Australia, United Kingdom

Page Count
9 pages

Category
Electrical Engineering and Systems Science: Systems and Control