On a closed-loop identification challenge in feedback optimization
By: Kristian Lindbäck Løvland, Lars Struen Imsland, Bjarne Grimstad
Potential Business Impact:
Makes model-based optimization of automated systems more reliable when the model is learned from imperfect, feedback-influenced data.
Feedback optimization has emerged as an effective strategy for steady-state optimization of dynamical systems. By exploiting models of the steady-state input-output sensitivity, methods of this type are often sample efficient, and their use of feedback makes them robust against model error. Still, this robustness has its limits, and the dependence on a model may hinder convergence in settings with large model error. Here we investigate the effect of a particular type of model error: bias that arises when the model is identified from closed-loop data. Our main results are a sufficient condition for convergence and a converse condition for divergence. The convergence condition requires that a matrix depending on the closed-loop sensitivity and on a noise-to-signal ratio of the data-generating system be positive definite. Negative definiteness of the same matrix characterizes an extreme case in which the bias induced by closed-loop data causes model-based feedback optimization to diverge.
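The convergence/divergence dichotomy described in the abstract can be illustrated in a toy setting. The sketch below is not the paper's construction: it assumes a linear steady-state map y = S_true u + d, a quadratic tracking objective, and three hypothetical sensitivity models S_hat (exact, mildly biased, and sign-flipped as an extreme stand-in for closed-loop identification bias). In this simplified noise-free setting, positive definiteness of the symmetric part of S_hat^T S_true plays the role of the paper's condition on the matrix built from the closed-loop sensitivity and noise-to-signal ratio.

```python
import numpy as np

# Hypothetical linear setting: true steady-state map y = S_true @ u + d,
# quadratic tracking objective Phi(y) = 0.5 * ||y - r||^2.
rng = np.random.default_rng(0)
n = 3
S_true = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # true sensitivity dy/du
d = rng.standard_normal(n)                              # steady-state disturbance
r = rng.standard_normal(n)                              # output setpoint

def run_fo(S_hat, alpha=0.05, iters=500):
    """Model-based feedback optimization: gradient steps that use the
    *modelled* sensitivity S_hat but the *measured* plant output y."""
    u = np.zeros(n)
    for _ in range(iters):
        y = S_true @ u + d                 # feedback: measure the real plant
        u = u - alpha * S_hat.T @ (y - r)  # the model enters only via S_hat
    return np.linalg.norm(S_true @ u + d - r)

def sym_posdef(M):
    """Positive definiteness of the symmetric part of M (the analogue,
    in this toy setting, of the paper's matrix condition)."""
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) > 0))

cases = {
    "exact model":            S_true,
    "mildly biased model":    S_true + 0.1 * rng.standard_normal((n, n)),
    "extreme bias (flipped)": -S_true,
}
for name, S_hat in cases.items():
    print(f"{name:>24}: sym(S_hat^T S_true) > 0? "
          f"{sym_posdef(S_hat.T @ S_true)}, final error = {run_fo(S_hat):.2e}")
```

For a sufficiently small step size, a positive definite symmetric part of S_hat^T S_true makes the iteration contract toward the optimizer, while a negative definite one destabilizes every mode of the update map, mirroring the extreme divergence case described in the abstract.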
Similar Papers
A constrained optimization approach to nonlinear system identification through simulation error minimization
Optimization and Control
Learns models of nonlinear systems by minimizing simulation error under constraints.
A Hybrid Systems Model of Feedback Optimization for Linear Systems
Systems and Control
Models feedback optimization of linear systems as a hybrid dynamical system.
Closed-Form Input Design for Identification under Output Feedback with Perturbation Constraints
Systems and Control
Designs small, safely constrained input perturbations so systems under feedback can be identified better.