Runtime Safety Monitoring of Deep Neural Networks for Perception: A Survey

Published: November 8, 2025 | arXiv ID: 2511.05982v1

By: Albert Schotschneider, Svetlana Pavlitska, J. Marius Zöllner

Potential Business Impact:

Helps keep self-driving cars and robots safe by detecting perception errors at runtime.

Business Areas:
Image Recognition, Data and Analytics, Software

Deep neural networks (DNNs) are widely used in perception systems for safety-critical applications, such as autonomous driving and robotics. However, DNNs remain vulnerable to various safety concerns, including generalization errors, out-of-distribution (OOD) inputs, and adversarial attacks, which can lead to hazardous failures. This survey provides a comprehensive overview of runtime safety monitoring approaches, which operate in parallel to DNNs during inference to detect these safety concerns without modifying the DNN itself. We categorize existing methods into three main groups: monitoring inputs, internal representations, and outputs. We analyze the state of the art for each category, identify strengths and limitations, and map methods to the safety concerns they address. In addition, we highlight open challenges and future research directions.
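
To make the output-monitoring category concrete, below is a minimal sketch of a runtime monitor based on the maximum softmax probability (MSP) baseline, a standard output-based OOD detection technique. This is not the survey's own method; the MSPMonitor class, the threshold value, and the toy model are illustrative assumptions. In practice the threshold would be calibrated offline on in-distribution data.

```python
# Minimal sketch of an output-based runtime monitor (MSP baseline).
# Runs in parallel to the DNN at inference time without modifying it.
import torch
import torch.nn as nn


class MSPMonitor:
    """Flags inputs whose top softmax score falls below a threshold."""

    def __init__(self, model: nn.Module, threshold: float = 0.7):
        self.model = model
        self.threshold = threshold  # assumed value; calibrate on in-distribution data

    @torch.no_grad()
    def __call__(self, x: torch.Tensor):
        logits = self.model(x)
        # max over classes returns (confidence values, predicted class indices)
        confidence, prediction = torch.softmax(logits, dim=-1).max(dim=-1)
        # Low confidence is treated as a potential safety concern,
        # e.g., an out-of-distribution input.
        is_suspect = confidence < self.threshold
        return prediction, confidence, is_suspect


# Illustrative usage with a hypothetical toy classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
monitor = MSPMonitor(model, threshold=0.7)
batch = torch.randn(4, 3, 32, 32)
pred, conf, suspect = monitor(batch)
print(pred, conf, suspect)
```

The same wrapper pattern extends to the other two categories the survey describes: an input monitor would score the raw input (e.g., with a density or reconstruction model) before the DNN runs, and an internal-representation monitor would hook intermediate activations instead of the final logits.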

Country of Origin
🇩🇪 Germany

Page Count
6 pages

Category
Computer Science:
Computer Vision and Pattern Recognition