Learning Verifiable Control Policies Using Relaxed Verification
By: Puja Chaudhury, Alexander Estornell, Michael Everett
Potential Business Impact:
Makes robots safer by checking their behavior while they learn.
To provide safety guarantees for learning-based control systems, recent work has developed formal verification methods that are applied after training ends. However, if the trained policy does not meet the specifications, or if the verification algorithm is conservative, establishing these guarantees may not be possible. Instead, this work proposes performing verification throughout training, with the ultimate aim of producing policies whose properties can be evaluated at runtime with lightweight, relaxed verification algorithms. The approach uses differentiable reachability analysis and incorporates new components into the loss function. Numerical experiments on quadrotor and unicycle models demonstrate that this approach yields learned control policies that satisfy desired reach-avoid and invariance specifications.
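As a rough illustration of the core idea, the sketch below trains a small neural policy with a loss built from a differentiable reachability relaxation (here, interval bound propagation). Everything in it (the single-integrator dynamics, the box-shaped goal and avoid sets, the penalty terms, and all names) is an illustrative assumption, not the paper's actual models or code.

import torch
import torch.nn as nn

def ibp_forward(net, lb, ub):
    # Propagate an axis-aligned box [lb, ub] through an MLP via interval
    # bound propagation, a simple differentiable reachability relaxation.
    for layer in net:
        if isinstance(layer, nn.Linear):
            mid, rad = (ub + lb) / 2, (ub - lb) / 2
            mid = layer(mid)
            rad = rad @ layer.weight.abs().t()
            lb, ub = mid - rad, mid + rad
        else:  # monotone activation (e.g., ReLU) preserves interval order
            lb, ub = layer(lb), layer(ub)
    return lb, ub

def reach_avoid_loss(policy, lb, ub, goal, avoid, steps=20, dt=0.1):
    goal_lb, goal_ub = goal
    avoid_lb, avoid_ub = avoid
    avoid_pen = torch.zeros(())
    for _ in range(steps):
        u_lb, u_ub = ibp_forward(policy, lb, ub)
        # Assumed single-integrator dynamics x' = x + dt * u; since dt > 0,
        # the interval bounds propagate componentwise.
        lb, ub = lb + dt * u_lb, ub + dt * u_ub
        # Invariance/avoidance term: penalize the volume of overlap between
        # the reachable box and the avoid set at every step.
        overlap = torch.clamp(torch.minimum(ub, avoid_ub)
                              - torch.maximum(lb, avoid_lb), min=0.0)
        avoid_pen = avoid_pen + overlap.prod()
    # Reach term: penalize any part of the final reachable box that lies
    # outside the goal set.
    reach_pen = torch.relu(goal_lb - lb).sum() + torch.relu(ub - goal_ub).sum()
    return reach_pen + avoid_pen

policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
lb0, ub0 = torch.tensor([-0.1, -0.1]), torch.tensor([0.1, 0.1])
goal = (torch.tensor([0.9, 0.9]), torch.tensor([1.1, 1.1]))
avoid = (torch.tensor([0.4, 0.4]), torch.tensor([0.6, 0.6]))
for epoch in range(500):
    opt.zero_grad()
    loss = reach_avoid_loss(policy, lb0, ub0, goal, avoid)
    loss.backward()
    opt.step()

The point of the sketch is that, because the reachability relaxation is differentiable, specification violations can be penalized directly in the training loss rather than checked only after training.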
Similar Papers
Learning Vision-Based Neural Network Controllers with Semi-Probabilistic Safety Guarantees
Robotics
Helps self-driving cars learn safely from cameras.
Certifying Stability of Reinforcement Learning Policies using Generalized Lyapunov Functions
Machine Learning (CS)
Makes smart robots safer by checking their moves.
Formal Verification of Noisy Quantum Reinforcement Learning Policies
Quantum Physics
Checks whether quantum computers act safely despite errors.