Robust Permissive Controller Synthesis for Interval MDPs
By: Khang Vo Huynh, David Parker, Lu Feng
Potential Business Impact:
Helps robots make safe choices when unsure.
We address the problem of robust permissive controller synthesis for robots operating under uncertain dynamics, modeled as Interval Markov Decision Processes (IMDPs). IMDPs generalize standard MDPs by allowing transition probabilities to vary within intervals, capturing epistemic uncertainty from sensing noise, actuation imprecision, and coarse system abstractions, all of which are common in robotics. Traditional controller synthesis typically yields a single deterministic strategy, limiting adaptability. In contrast, permissive controllers (multi-strategies) allow multiple actions per state, enabling runtime flexibility and resilience. However, prior work on permissive controller synthesis generally assumes exact transition probabilities, which is unrealistic in many robotic applications. We present the first framework for robust permissive controller synthesis on IMDPs, guaranteeing that all strategies compliant with the synthesized multi-strategy satisfy reachability or reward-based specifications under all admissible transition probabilities. We formulate the problem as mixed-integer linear programs (MILPs) and propose two encodings: a baseline vertex-enumeration method and a scalable duality-based method that avoids explicit enumeration. Experiments on four benchmark domains show that both methods synthesize robust, maximally permissive controllers and scale to large IMDPs with up to hundreds of thousands of states.
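To make the IMDP setting concrete, the following is a minimal sketch of robust (pessimistic) value iteration for a reachability objective on an IMDP. This is an illustration of the underlying worst-case semantics only, not the paper's MILP encodings or its permissive-controller synthesis; all names, the toy model, and the dictionary-based representation are hypothetical. The adversary's inner minimization uses the standard order-based assignment: start each successor at its lower bound, then pour the remaining probability mass into the lowest-valued successors first.

```python
def worst_case_expectation(intervals, values):
    """Adversary picks transition probabilities within the intervals
    (summing to 1) so as to minimize the expected value.

    intervals: dict successor -> (lower, upper) probability bounds
    values:    dict successor -> current value estimate
    """
    # Process successors from lowest to highest value.
    succs = sorted(intervals, key=lambda s: values[s])
    # Start every successor at its lower bound.
    p = {s: intervals[s][0] for s in succs}
    remaining = 1.0 - sum(p.values())
    # Give leftover mass to low-value successors first, up to upper bounds.
    for s in succs:
        extra = min(intervals[s][1] - p[s], remaining)
        p[s] += extra
        remaining -= extra
    return sum(p[s] * values[s] for s in succs)


def robust_reach_prob(imdp, goal, iters=100):
    """Worst-case probability of reaching `goal` under the best action choice.

    imdp: dict state -> dict action -> {successor: (lower, upper)}
    """
    V = {s: 1.0 if s in goal else 0.0 for s in imdp}
    for _ in range(iters):
        V = {
            s: 1.0 if s in goal else (
                max(worst_case_expectation(iv, V) for iv in acts.values())
                if acts else 0.0  # states with no actions keep value 0
            )
            for s, acts in imdp.items()
        }
    return V


# Toy IMDP: from s0, action "a" reaches the goal with probability in
# [0.6, 0.8]; action "b" with probability in [0.5, 0.9].
imdp = {
    "s0": {
        "a": {"goal": (0.6, 0.8), "sink": (0.2, 0.4)},
        "b": {"goal": (0.5, 0.9), "sink": (0.1, 0.5)},
    },
    "goal": {},
    "sink": {},
}
V = robust_reach_prob(imdp, goal={"goal"})
# V["s0"] == 0.6: action "a" guarantees 0.6 even when the adversary
# pushes mass toward the sink; "b" only guarantees 0.5.
```

A permissive multi-strategy in this picture would retain every action whose worst-case value meets the specification threshold (e.g., both "a" and "b" if the threshold were 0.5), which is what the paper's MILP encodings compute with optimality and robustness guarantees.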
Similar Papers
Beyond Interval MDPs: Tight and Efficient Abstractions of Stochastic Systems
Systems and Control
Makes robots safer by predicting their actions.
Data-Driven Abstraction and Synthesis for Stochastic Systems with Unknown Dynamics
Systems and Control
Teaches robots to learn and follow rules.
Data-Driven Yet Formal Policy Synthesis for Stochastic Nonlinear Dynamical Systems
Systems and Control
Teaches robots to control tricky machines reliably.