ProSh: Probabilistic Shielding for Model-free Reinforcement Learning
By: Edwin Hamel-De le Court, Gaspard Ohlmann, Francesco Belardinelli
Potential Business Impact:
Keeps robots from making dangerous mistakes.
Safety is a major concern in reinforcement learning (RL): we aim to develop RL systems that not only perform optimally, but are also safe to deploy, by providing formal guarantees about their safety. To this end, we introduce Probabilistic Shielding via Risk Augmentation (ProSh), a model-free algorithm for safe reinforcement learning under cost constraints. ProSh augments the Constrained MDP state space with a risk budget and enforces safety by applying a shield to the agent's policy distribution using a learned cost critic. The shield ensures that all sampled actions remain safe in expectation. We also show that optimality is preserved when the environment is deterministic. Since ProSh is model-free, safety during training depends on the knowledge acquired about the environment so far. We provide a tight upper bound on the expected cost that depends only on the backup-critic accuracy and is always satisfied during training. Under mild, practically achievable assumptions, ProSh thus guarantees safety even at training time, as shown in the experiments.
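To make the shielding idea concrete, here is a minimal sketch (not the authors' implementation) for a discrete action space. It assumes a hypothetical learned cost critic `cost_critic(state, action)` estimating expected future cost; actions whose estimate exceeds the remaining risk budget carried in the augmented state are masked out of the policy distribution, and the surviving probabilities are renormalized before sampling.

```python
import numpy as np

def shielded_sample(policy_probs, state, budget, cost_critic, rng=None):
    """Sample an action after shielding the policy distribution.

    Sketch of the shielding idea under stated assumptions, not ProSh's
    exact rule.

    policy_probs : array of shape (n_actions,), the agent's distribution.
    state        : current environment state.
    budget       : remaining risk budget from the augmented state.
    cost_critic  : callable (state, action) -> estimated expected cost.
    """
    rng = rng or np.random.default_rng()
    costs = np.array([cost_critic(state, a) for a in range(len(policy_probs))])
    safe = costs <= budget                      # safe in expectation
    if not safe.any():                          # fallback: least-risky action
        return int(np.argmin(costs))
    masked = np.where(safe, policy_probs, 0.0)  # shield the distribution
    masked /= masked.sum()                      # renormalize over safe actions
    return int(rng.choice(len(policy_probs), p=masked))
```

In the risk-augmentation scheme the abstract describes, the budget would then be updated with the cost actually incurred and carried into the next augmented state, so the shield's constraint tracks the cumulative cost constraint of the Constrained MDP.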
Similar Papers
Probabilistic Shielding for Safe Reinforcement Learning
Machine Learning (Stat)
Keeps robots safe while they learn new tasks.
Predictive Safety Shield for Dyna-Q Reinforcement Learning
Machine Learning (CS)
Learns to be safe while still getting better.
Compositional shield synthesis for safe reinforcement learning in partial observability
Systems and Control
Keeps robots safe while learning new tasks.