Learning-Based Shrinking Disturbance-Invariant Tubes for State- and Input-Dependent Uncertainty

Published: January 16, 2026 | arXiv ID: 2601.11426v1

By: Abdelrahman Ramadan, Sidney Givigi

Potential Business Impact:

Enables robots to learn from their mistakes while staying safe.

Business Areas:
Industrial Automation / Manufacturing, Science and Engineering

We develop a learning-based framework for constructing shrinking disturbance-invariant tubes under state- and input-dependent uncertainty, intended as a building block for tube Model Predictive Control (MPC), and certify safety via a lifted, isotone (order-preserving) fixed-point map. Gaussian Process (GP) posteriors become $(1-\alpha)$ credible ellipsoids, then polytopic outer sets for deterministic set operations. A two-time-scale scheme separates learning epochs, where these polytopes are frozen, from an inner, outside-in iteration that converges to a compact fixed point $Z^\star\!\subseteq\!\mathcal G$; its state projection is RPI for the plant. As data accumulate, disturbance polytopes tighten, and the associated tubes nest monotonically, resolving the circular dependence between the set to be verified and the disturbance model while preserving hard constraints. A double-integrator study illustrates shrinking tube cross-sections in data-rich regions while maintaining invariance.
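Two ingredients of the abstract can be sketched in miniature: bounding a GP posterior's $(1-\alpha)$ credible ellipsoid by an axis-aligned outer box (a simple polytope), and an outside-in isotone iteration that contracts onto an RPI set. The sketch below is illustrative only and not the paper's algorithm: it assumes a 2-D Gaussian disturbance (where the $\chi^2$ quantile with 2 degrees of freedom is $-2\ln\alpha$ in closed form) and a scalar system $x^+ = a x + w$ with $|a|<1$, for which the minimal RPI interval is $[-w/(1-a),\, w/(1-a)]$. All function names are hypothetical.

```python
import math

def credible_box(mu, cov, alpha):
    """Axis-aligned outer box for the (1-alpha) credible ellipsoid of a
    2-D Gaussian N(mu, cov). The chi-squared quantile with 2 dof is
    -2*ln(alpha); the ellipsoid's support in direction e_i gives
    half-width sqrt(r2 * cov[i][i])."""
    r2 = -2.0 * math.log(alpha)
    half = [math.sqrt(r2 * cov[i][i]) for i in range(len(mu))]
    return [(mu[i] - half[i], mu[i] + half[i]) for i in range(len(mu))]

def outside_in(z0, a, w, tol=1e-9):
    """Outside-in iteration Z_{k+1} = a*Z_k + [-w, w] on a scalar
    interval, with 0 <= a < 1. The map is isotone, so starting from any
    superset of the fixed point the iterates nest downward and converge
    to the minimal RPI interval [-w/(1-a), w/(1-a)]."""
    lo, hi = z0
    while True:
        nlo, nhi = a * lo - w, a * hi + w
        if abs(nlo - lo) < tol and abs(nhi - hi) < tol:
            return (nlo, nhi)
        lo, hi = nlo, nhi

# Toy usage: a 95% credible box and a fixed-point tube cross-section.
box = credible_box([0.0, 0.0], [[0.04, 0.0], [0.0, 0.01]], alpha=0.05)
zstar = outside_in((-10.0, 10.0), a=0.5, w=1.0)  # approaches (-2, 2)
```

As the abstract notes, tighter disturbance sets yield nested tubes: rerunning `outside_in` with a smaller `w` (mimicking a tighter polytope after more data) produces an interval contained in the previous one.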

Country of Origin
🇨🇦 Canada

Page Count
6 pages

Category
Electrical Engineering and Systems Science:
Systems and Control