Runtime Safety Monitoring of Deep Neural Networks for Perception: A Survey
By: Albert Schotschneider, Svetlana Pavlitska, J. Marius Zöllner
Potential Business Impact:
Keeps self-driving cars safe from errors.
Deep neural networks (DNNs) are widely used in perception systems for safety-critical applications, such as autonomous driving and robotics. However, DNNs remain vulnerable to various safety concerns, including generalization errors, out-of-distribution (OOD) inputs, and adversarial attacks, which can lead to hazardous failures. This survey provides a comprehensive overview of runtime safety monitoring approaches, which operate in parallel to DNNs during inference to detect these safety concerns without modifying the DNN itself. We categorize existing methods into three main groups: monitoring inputs, internal representations, and outputs. We analyze the state of the art for each category, identify strengths and limitations, and map methods to the safety concerns they address. In addition, we highlight open challenges and future research directions.
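To make the idea of a runtime monitor running in parallel to the DNN concrete, below is a minimal sketch of one common output-monitoring baseline: thresholding the maximum softmax probability (MSP) to flag possibly out-of-distribution inputs. The toy model, input size, and threshold value are illustrative assumptions, not methods or settings taken from the survey.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy classifier standing in for the monitored perception DNN.
model = torch.nn.Sequential(torch.nn.Linear(16, 10))
model.eval()

# Assumed trust threshold; in practice it would be tuned on held-out
# in-distribution data to trade off false alarms against missed detections.
MSP_THRESHOLD = 0.7

def monitored_inference(x: torch.Tensor):
    """Run the DNN and an output-level monitor side by side.

    Returns the predicted class and a flag indicating whether the
    maximum softmax probability falls below the trust threshold,
    i.e. the input may be out-of-distribution or otherwise unsafe.
    """
    with torch.no_grad():
        logits = model(x)
        probs = F.softmax(logits, dim=-1)
        msp, pred = probs.max(dim=-1)
    flagged = bool(msp.item() < MSP_THRESHOLD)
    return pred.item(), flagged

# Example: monitor a single (random) input vector.
prediction, unsafe = monitored_inference(torch.randn(1, 16))
print(f"prediction={prediction}, flagged_as_unsafe={unsafe}")
```

Note that the monitor reads only the DNN's outputs and never changes its weights or predictions, which mirrors the non-intrusive, inference-time setting the survey focuses on; input- and representation-level monitors follow the same pattern but inspect different signals.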
Similar Papers
Safety Monitoring for Learning-Enabled Cyber-Physical Systems in Out-of-Distribution Scenarios
Machine Learning (CS)
Keeps smart machines safe from unexpected problems.
Synthesis of Deep Neural Networks with Safe Robust Adaptive Control for Reliable Operation of Wheeled Mobile Robots
Robotics
Keeps big robots safe even when things go wrong.
Revisiting Evaluation of Deep Neural Networks for Pedestrian Detection
CV and Pattern Recognition
Helps self-driving cars spot people better.