Score: 1

Data-driven learning of feedback maps for explicit robust predictive control: an approximation theoretic view

Published: October 15, 2025 | arXiv ID: 2510.13522v1

By: Siddhartha Ganguly, Shubham Gupta, Debasish Chatterjee

Potential Business Impact:

Lets robots and automated systems compute fast, provably safe control actions even when disturbances act against them.

Business Areas:
Simulation Software

We establish an algorithm to learn feedback maps from data for a class of robust model predictive control (MPC) problems. The algorithm accounts for the approximation errors due to learning directly at the synthesis stage, ensuring recursive feasibility by construction. The optimal control problem consists of a linear noisy dynamical system, quadratic stage and terminal costs as the objective, and convex constraints on the state, control, and disturbance sequences; the control minimizes and the disturbance maximizes the objective. We proceed in two steps. (a) Data generation: we reformulate the given min-max problem as a convex semi-infinite program and employ recently developed tools to solve it exactly on grid points of the state space, generating (state, action) data. (b) Learning approximate feedback maps: we employ two approximation schemes that furnish tight approximations within preassigned uniform error bounds on the admissible state space to learn the unknown feedback policy. Stability of the closed-loop system under the approximate feedback policies is also guaranteed under a standard set of hypotheses. Two benchmark numerical examples illustrate the results.
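To make the two-step pipeline concrete, here is a minimal Python sketch, not the paper's algorithm: a brute-force sampled min-max over a one-step cost stands in for the exact convex semi-infinite-programming solve, and a polynomial fit with a uniform error check stands in for the paper's approximation schemes. The scalar system, cost weights, constraint bounds, grid sizes, and the degree-5 polynomial are all illustrative assumptions.

```python
# Sketch of: (a) generating (state, action) data on a state-space grid from a
# robust min-max problem, and (b) learning an approximate feedback map with a
# preassigned uniform error bound. Values below are assumptions for illustration.
import numpy as np

# Illustrative scalar system x+ = a*x + b*u + w with assumed weights and sets
a, b = 1.1, 1.0
Q, R, P = 1.0, 0.1, 2.0          # stage/terminal cost weights (assumed)
X = np.linspace(-5.0, 5.0, 201)  # admissible state grid
U = np.linspace(-2.0, 2.0, 401)  # admissible control set (sampled)
W = np.linspace(-0.2, 0.2, 41)   # bounded disturbance set (sampled)

def robust_action(x):
    """Brute-force min over u of max over w of a one-step quadratic cost.
    Stand-in for the exact semi-infinite-program solve used in the paper."""
    xp = a * x + b * U[:, None] + W[None, :]          # next states, |U| x |W|
    cost = Q * x**2 + R * U[:, None]**2 + P * xp**2   # stage + terminal cost
    worst = cost.max(axis=1)                          # worst case over disturbances
    return U[np.argmin(worst)]                        # minimizing control

# (a) Data generation: solve the robust problem at every grid point
actions = np.array([robust_action(x) for x in X])

# (b) Learn an approximate feedback map and check a uniform error bound
eps = 1e-2                                  # preassigned uniform error bound (assumed)
coeffs = np.polyfit(X, actions, deg=5)      # polynomial surrogate (assumed scheme)
fit_err = np.max(np.abs(np.polyval(coeffs, X) - actions))
print(f"uniform fit error on grid: {fit_err:.4f} (target eps = {eps})")
# In the paper, this eps is accounted for at the synthesis stage (e.g., by
# tightening constraints), so the learned map remains recursively feasible.
```

The design point the sketch tries to convey is that the approximation error of the learned map is measured in the uniform (worst-case) sense over the admissible state space, which is what allows it to be budgeted for at synthesis time rather than discovered after deployment.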

Country of Origin
🇮🇳 🇯🇵 🇨🇭 India, Japan, Switzerland

Page Count
27 pages

Category
Mathematics:
Optimization and Control