Learning Vision-Based Neural Network Controllers with Semi-Probabilistic Safety Guarantees
By: Xinhang Ma, Junlin Wu, Hussein Sibai, and more
Potential Business Impact:
Enables self-driving cars to learn from cameras without sacrificing safety.
Ensuring safety in autonomous systems with vision-based control remains a critical challenge due to the high dimensionality of image inputs and the unknown relationship between the true system state and its visual manifestation. Existing methods for learning-based control in such settings typically lack formal safety guarantees. To address this challenge, we introduce a novel semi-probabilistic verification framework that integrates reachability analysis with conditional generative adversarial networks and distribution-free tail bounds to enable efficient and scalable verification of vision-based neural network controllers. We then develop a gradient-based training approach that combines a novel safety loss function, a safety-aware data-sampling strategy that selects and stores critical training examples, and curriculum learning to efficiently synthesize safe controllers within this semi-probabilistic framework. Empirical evaluations on X-Plane 11 airplane landing simulation, CARLA-simulated autonomous lane following, and F1Tenth lane following in a physical, visually rich miniature environment demonstrate the effectiveness of our method in achieving formal safety guarantees while maintaining strong nominal performance. Our code is available at https://github.com/xhOwenMa/SPVT.
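The abstract names distribution-free tail bounds but does not say which one; the sketch below is a minimal illustration of a common choice, a one-sided Hoeffding bound, which converts an empirical violation rate over sampled generator outputs into a high-confidence upper bound on the true violation probability. The function name and the sampling setup are assumptions for illustration, not the paper's API.

```python
import math

def hoeffding_upper_bound(num_unsafe: int, num_samples: int, delta: float) -> float:
    """One-sided Hoeffding bound: with probability at least 1 - delta over the
    sampled images, the true violation probability is at most the empirical
    violation rate plus a slack term that shrinks like 1/sqrt(n)."""
    empirical_rate = num_unsafe / num_samples
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * num_samples))
    return min(1.0, empirical_rate + slack)

# Example: zero violations observed across 10,000 sampled generator outputs,
# at confidence 1 - 1e-3, yields an upper bound of roughly 1.86%.
print(hoeffding_upper_bound(0, 10_000, 1e-3))
```

Likewise, the safety loss is not defined in the abstract; a plausible hinge-style formulation is sketched below, assuming the verifier exposes a per-sample over-approximated reachable-set bound (`reach_upper`) and an unsafe-region boundary (`unsafe_bound`), both hypothetical names.

```python
import torch

def safety_loss(reach_upper: torch.Tensor, unsafe_bound: torch.Tensor,
                margin: float = 0.1) -> torch.Tensor:
    """Hinge-style penalty: zero when the over-approximated reachable set
    stays at least `margin` below the unsafe boundary, growing linearly once
    the reachable set encroaches on it. Differentiable, so it can be combined
    with a nominal performance loss during gradient-based training."""
    return torch.relu(reach_upper - unsafe_bound + margin).mean()
```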
Similar Papers
How Safe Will I Be Given What I Saw? Calibrated Prediction of Safety Chances for Image-Controlled Autonomy
Robotics
Makes robots safer by predicting future actions.
Learning Verifiable Control Policies Using Relaxed Verification
Systems and Control
Makes robots safer by checking them while learning.
Learning Safe Control via On-the-Fly Bandit Exploration
Robotics
Keeps robots safe while they learn new things.