What's Really Different with AI? -- A Behavior-based Perspective on System Safety for Automated Driving Systems
By: Marcus Nolte, Nayel Fabian Salem, Olaf Franke, and more
Potential Business Impact:
Helps make self-driving cars safer to deploy and use.
Assuring safety for "AI-based" systems is one of the current challenges in safety engineering. For automated driving systems in particular, further assurance challenges result from the open context in which the systems must operate after deployment. The standardization and regulation landscape for "AI-based" systems is becoming ever more complex, as standards and regulations are released at a high frequency. This position paper seeks to provide guidance for making qualified arguments about which standards can meaningfully be applied to ("AI-based") automated driving systems. Furthermore, we argue for clearly differentiating sources of risk between AI-specific uncertainties and general uncertainties related to the open context. In our view, a clear conceptual separation can help exploit commonalities that close the gap between system-level and AI-specific safety analyses, while ensuring the rigor required for engineering safe "AI-based" systems.
Similar Papers
AI Safety Assurance for Automated Vehicles: A Survey on Research, Standardization, Regulation
Computers and Society
Makes self-driving cars safer to use.
AI Safety is Stuck in Technical Terms -- A System Safety Response to the International AI Safety Report
Computers and Society
Makes AI safer by looking at all its parts.
Safety is Essential for Responsible Open-Ended Systems
Artificial Intelligence
AI learns new things but can become unpredictable.