On a closed-loop identification challenge in feedback optimization

Published: September 1, 2025 | arXiv ID: 2509.01188v1

By: Kristian Lindbäck Løvland, Lars Struen Imsland, Bjarne Grimstad

Potential Business Impact:

Improves the reliability of model-based optimization methods when the model is learned from biased (closed-loop) data.

Business Areas:
A/B Testing, Data and Analytics

Feedback optimization has emerged as an effective strategy for steady-state optimization of dynamical systems. By exploiting models of the steady-state input-output sensitivity, methods of this type are often sample efficient, and their use of feedback makes them robust against model error. Still, this robustness has its limitations, and the dependence on a model may hinder convergence in settings with high model error. We investigate here the effect of a particular type of model error: bias due to identifying the model from closed-loop data. Our main results are a sufficient convergence condition and a converse divergence condition. The convergence condition requires a matrix that depends on the closed-loop sensitivity and a noise-to-signal ratio of the data-generating system to be positive definite. The negative definiteness of the same matrix characterizes an extreme case where the bias due to closed-loop data results in divergence of model-based feedback optimization.
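The abstract's convergence/divergence dichotomy can be illustrated with a minimal numerical sketch. The setup below is not the paper's exact formulation: it assumes a linear steady-state map `y = H u`, a quadratic cost, and a model-based gradient step that uses a (possibly biased) sensitivity estimate `H_hat` in place of the true `H`. A classical sufficient condition for such biased-gradient iterations to converge is that the symmetric part of `H_hat.T @ H` is positive definite; if it is negative definite, the iteration diverges.

```python
import numpy as np

def run(H, H_hat, y_ref, eta=0.05, steps=200):
    """Model-based feedback-optimization sketch:
    u_{k+1} = u_k - eta * H_hat^T * grad_y f(H u_k),
    with f(y) = 0.5 * ||y - y_ref||^2. Returns the final tracking error."""
    u = np.zeros(H.shape[1])
    for _ in range(steps):
        grad_y = H @ u - y_ref          # true gradient of f w.r.t. the output
        u = u - eta * H_hat.T @ grad_y  # step uses the *estimated* sensitivity
    return np.linalg.norm(H @ u - y_ref)

H = np.array([[2.0, 0.3],
              [0.1, 1.5]])              # true steady-state sensitivity (assumed)
y_ref = np.array([1.0, -1.0])

err_exact = run(H, H, y_ref)            # unbiased model: converges
err_scaled = run(H, 0.5 * H, y_ref)     # scaled bias, sym(H_hat^T H) > 0: still converges
err_flipped = run(H, -H, y_ref)         # sign-flipped bias, sym(H_hat^T H) < 0: diverges
print(err_exact, err_scaled, err_flipped)
```

The scaled-bias case converges despite a 50% model error, mirroring the robustness the abstract attributes to feedback; the sign-flipped case shows the extreme regime where the definiteness condition fails and the iteration diverges.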

Country of Origin
🇳🇴 Norway

Page Count
7 pages

Category
Mathematics: Optimization and Control