Conservative Perception Models for Probabilistic Verification
By: Matthew Cleaveland, Pengyuan Lu, Oleg Sokolsky, and more
Potential Business Impact:
Makes self-driving cars safer by checking their "eyes."
Verifying the behavior of autonomous systems with learned perception components is a challenging problem due to the complexity of the perception components and the uncertainty of operating environments. Probabilistic model checking is a powerful tool for providing guarantees on stochastic models of systems. However, constructing model-checkable models of black-box perception components for system-level mathematical guarantees has been an enduring challenge. In this paper, we propose a method for constructing provably conservative Interval Markov Decision Process (IMDP) models of closed-loop systems with perception components. We prove that our technique yields conservative abstractions with a user-specified probability. We evaluate our approach on an automatic braking case study using both a synthetic perception component and the YOLO11 object detector in the CARLA driving simulator.
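To make the idea of a conservative IMDP abstraction concrete, the sketch below shows one plausible way to turn empirical perception data into interval-valued transition probabilities: detection rates are estimated per distance bin and each rate is replaced by a Clopper-Pearson confidence interval, so the resulting transition probabilities are conservative with a user-specified confidence. The bin structure, counts, and the choice of Clopper-Pearson bounds are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch: interval transition probabilities for an IMDP abstraction
# derived from hypothetical perception detection counts. Names and data are
# illustrative only.
from scipy.stats import beta


def clopper_pearson(k: int, n: int, conf: float = 0.95):
    """Two-sided Clopper-Pearson confidence interval for a binomial proportion."""
    alpha = 1.0 - conf
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi


# Hypothetical detection counts per distance bin: (detections, samples).
# Each bin corresponds to an abstract IMDP state whose "detect" transition
# carries an interval probability instead of a point estimate.
detection_counts = {"0-20m": (96, 100), "20-40m": (83, 100), "40-60m": (61, 100)}

imdp_intervals = {
    bin_id: clopper_pearson(k, n, conf=0.95)
    for bin_id, (k, n) in detection_counts.items()
}

for bin_id, (lo, hi) in imdp_intervals.items():
    print(f"{bin_id}: detection probability in [{lo:.3f}, {hi:.3f}]")
```

A probabilistic model checker that supports IMDPs (e.g., PRISM's interval MDP support) can then compute worst-case and best-case safety probabilities over these intervals, which is what makes the abstraction useful for system-level guarantees.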
Similar Papers
Towards Unified Probabilistic Verification and Validation of Vision-Based Autonomy
Systems and Control
Makes self-driving cars safer in new places.
Robust Model Predictive Control Design for Autonomous Vehicles with Perception-based Observers
Robotics
Makes robots safer by understanding bad sensor data.
Scenario-based Compositional Verification of Autonomous Systems with Neural Perception
Machine Learning (CS)
Makes self-driving cars safer in changing weather.