Agile Temporal Discretization for Symbolic Optimal Control
By: Adrien Janssens, Adrien Banse, Julien Calbert, and more
Potential Business Impact:
Lets robot controllers use flexible timing to perform better.
As control systems grow in complexity, abstraction-based methods have become essential for designing controllers with formal guarantees. However, a key limitation of these methods is their reliance on discrete-time models, typically obtained by discretizing continuous-time systems with a fixed timestep. This discretization leads to two major problems: when the timestep is small, the abstraction includes numerous stuttering and spurious trajectories, making controller synthesis suboptimal or even infeasible; conversely, a large timestep may also render control design infeasible due to a lack of flexibility. In this work, drawing inspiration from Reinforcement Learning concepts, we introduce temporal abstractions, which allow for a flexible timestep. We provide a method for constructing such abstractions and formally establish their correctness in controller design. Furthermore, we show how to apply these abstractions to optimal control under reachability specifications. Finally, we showcase our methods on two numerical examples, highlighting that our approach leads to controllers that achieve a lower worst-case control cost.
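To make the flexible-timestep idea concrete, here is a minimal sketch, not the paper's actual construction: a scalar system x' = x + u·τ abstracted into interval cells, where each input can be applied for any duration in a small candidate set of timesteps, and a min-max value iteration computes the worst-case time to reach a target cell. All names and parameters (`N`, `TAUS`, `US`, `TARGET`) are illustrative assumptions, and the cell counts are chosen so the example stays tiny.

```python
# Illustrative sketch (not the paper's algorithm): symbolic reachability
# with a flexible timestep on the toy system x' = x + u * tau over [0, 1].
N = 10                      # number of abstract interval cells over [0, 1]
W = 1.0 / N                 # cell width
TAUS = [0.05, 0.1, 0.2]     # candidate timesteps: the "temporal" choice
US = [-1.0, 0.0, 1.0]       # finite input alphabet
TARGET = N - 1              # specification: reach the rightmost cell
INF = float("inf")

def successors(cell, u, tau):
    """Cells intersecting the reachable interval of `cell` under (u, tau)."""
    lo = cell * W + u * tau
    hi = (cell + 1) * W + u * tau
    lo, hi = max(lo, 0.0), min(hi, 1.0)
    if lo >= hi:            # reachable set left the domain: no valid move
        return None
    first = min(int(lo / W), N - 1)
    last = min(int((hi - 1e-12) / W), N - 1)
    return range(first, last + 1)

def value_iteration():
    """Worst-case time-to-target: V[s] = min over (u, tau) of tau + max V[s']."""
    V = [INF] * N
    V[TARGET] = 0.0
    for _ in range(10 * N):     # more than enough sweeps to converge here
        for s in range(N):
            if s == TARGET:
                continue
            best = V[s]
            for u in US:
                for tau in TAUS:
                    succ = successors(s, u, tau)
                    if succ is None:
                        continue
                    worst = max(V[t] for t in succ)  # adversarial successor
                    if worst < INF:
                        best = min(best, tau + worst)
            V[s] = best
    return V

V = value_iteration()
```

Note how the timestep interacts with the abstraction: with `tau = 0.05` the shifted cell straddles two cells, so the symbolic transition is nondeterministic (the small-timestep "spurious trajectory" effect), while `tau = 0.1` or `0.2` aligns with the grid and yields a single successor. Letting the controller pick `tau` per step is what keeps the worst-case cost low.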
Similar Papers
Scalable and Approximation-free Symbolic Control for Unknown Euler-Lagrange Systems
Systems and Control
Makes robots move safely without perfect instructions.
Conformal Data-driven Control of Stochastic Multi-Agent Systems under Collaborative Signal Temporal Logic Specifications
Systems and Control
Helps robots work together safely, even with surprises.
Switched Systems Control via Discreteness-Promoting Regularization
Optimization and Control
Makes computer controls choose the best options faster.