Learning-Based Shrinking Disturbance-Invariant Tubes for State- and Input-Dependent Uncertainty
By: Abdelrahman Ramadan, Sidney Givigi
Potential Business Impact:
Helps robots learn safely from their mistakes.
We develop a learning-based framework for constructing shrinking disturbance-invariant tubes under state- and input-dependent uncertainty, intended as a building block for tube Model Predictive Control (MPC), and certify safety via a lifted, isotone (order-preserving) fixed-point map. Gaussian Process (GP) posteriors are converted into $(1-\alpha)$ credible ellipsoids and then into polytopic outer sets, so that all subsequent set operations are deterministic. A two-time-scale scheme separates learning epochs, during which these polytopes are frozen, from an inner, outside-in iteration that converges to a compact fixed point $Z^\star \subseteq \mathcal{G}$; its state projection is robust positively invariant (RPI) for the plant. As data accumulate, the disturbance polytopes tighten and the associated tubes nest monotonically, resolving the circular dependence between the set to be verified and the disturbance model while preserving hard constraints. A double-integrator case study illustrates shrinking tube cross-sections in data-rich regions while maintaining invariance.
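The abstract's pipeline (GP posterior to $(1-\alpha)$ credible ellipsoid, ellipsoid to a polytopic outer set, then an outside-in isotone iteration to a compact fixed point) can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes NumPy/SciPy, uses axis-aligned boxes in place of general polytopes, and replaces the lifted state-input dynamics with a toy componentwise-stable map; all function and variable names are placeholders.

```python
# Minimal sketch (not the authors' code): turn a GP posterior at a query
# point into a (1 - alpha) credible ellipsoid, outer-bound it with an
# axis-aligned box (a simple polytope), and run an outside-in interval
# iteration of an isotone map until it settles on a fixed point.
import numpy as np
from scipy.stats import chi2


def credible_ellipsoid(mean, cov, alpha=0.05):
    """(center, shape) of the (1 - alpha) credible ellipsoid
    {w : (w - mean)^T cov^{-1} (w - mean) <= r2} for a Gaussian posterior."""
    r2 = chi2.ppf(1.0 - alpha, df=len(mean))
    return mean, cov * r2


def box_outer_bound(center, shape):
    """Axis-aligned box (interval hull) containing the ellipsoid:
    half-widths are the square roots of the scaled shape's diagonal."""
    half = np.sqrt(np.diag(shape))
    return center - half, center + half  # lower, upper corners


def outside_in_fixed_point(lo0, hi0, contract, tol=1e-9, max_iter=1000):
    """Iterate an isotone (order-preserving) contraction from the outside in:
    start from a large box and apply `contract` until the box stops shrinking.
    With the disturbance box frozen, the iterates nest downward to a compact
    fixed point."""
    lo, hi = lo0.copy(), hi0.copy()
    for _ in range(max_iter):
        new_lo, new_hi = contract(lo, hi)
        # Intersect with the previous iterate so the sequence is nested.
        new_lo, new_hi = np.maximum(new_lo, lo), np.minimum(new_hi, hi)
        if np.max(np.abs(new_lo - lo)) < tol and np.max(np.abs(new_hi - hi)) < tol:
            return new_lo, new_hi
        lo, hi = new_lo, new_hi
    return lo, hi


if __name__ == "__main__":
    # Hypothetical GP posterior for a 2-D disturbance at some state/input pair.
    w_mean = np.array([0.0, 0.0])
    w_cov = np.diag([0.04, 0.01])
    c, S = credible_ellipsoid(w_mean, w_cov, alpha=0.05)
    w_lo, w_hi = box_outer_bound(c, S)

    # Toy stable dynamics x+ = a*x + w, componentwise, with |a| < 1:
    # the exact minimal invariant interval is [w_lo, w_hi] / (1 - a).
    a = np.array([0.6, 0.8])

    def contract(lo, hi):
        return a * lo + w_lo, a * hi + w_hi

    z_lo, z_hi = outside_in_fixed_point(np.full(2, -10.0), np.full(2, 10.0), contract)
    print("disturbance box:", w_lo, w_hi)
    print("invariant box  :", z_lo, z_hi)
```

In the paper the sets are general polytopes and the iterated map acts on the lifted state-input space; intervals and scalar per-axis dynamics are used here only to keep the outside-in, nesting fixed-point structure visible.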
Similar Papers
Learning-based Homothetic Tube MPC
Systems and Control
Teaches robots to learn and fix their own mistakes.
Learning-Based Conformal Tube MPC for Safe Control in Interactive Multi-Agent Systems
Systems and Control
Keeps robots safe when they don't know what others will do.
Robust Multi-Agent Safety via Tube-Based Tightened Exponential Barrier Functions
Systems and Control
Keeps robots safe when working together.