Incorporating Failure of Machine Learning in Dynamic Probabilistic Safety Assurance

Published: June 7, 2025 | arXiv ID: 2506.06868v1

By: Razieh Arshadizadeh, Mahmoud Asgari, Zeinab Khosravi, and more

Potential Business Impact:

Makes self-driving car platoons safer by continuously checking whether their ML components' "thinking" can still be trusted.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine Learning (ML) models are increasingly integrated into safety-critical systems, such as autonomous vehicle platooning, to enable real-time decision-making. However, their inherent imperfection introduces a new class of failure: reasoning failures, often triggered by distributional shifts between operational and training data. Traditional safety assessment methods, which rely on design artefacts or code, are ill-suited for ML components that learn behaviour from data. SafeML was recently proposed to dynamically detect such shifts and assign confidence levels to the reasoning of ML-based components. Building on this, we introduce a probabilistic safety assurance framework that integrates SafeML with Bayesian Networks (BNs) to model ML failures as part of a broader causal safety analysis. This allows for dynamic safety evaluation and system adaptation under uncertainty. We demonstrate the approach on a simulated automotive platooning system with traffic sign recognition. The findings highlight the broader potential benefits of explicitly modelling ML failures in safety assessment.
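As a rough sketch of the idea rather than the authors' implementation, the example below approximates a SafeML-style distributional-shift check with a two-sample Kolmogorov-Smirnov test from scipy and feeds the resulting confidence level into a hand-rolled two-node Bayesian network (ML failure causing a hazard). The threshold, node structure, and probability tables are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def safeml_confidence(train_features, runtime_features, alpha=0.05):
    """SafeML-style check: compare the runtime feature distribution
    against the training distribution with a two-sample KS test and
    map the p-value to a confidence level in [0, 1] (assumed mapping)."""
    p_value = ks_2samp(train_features, runtime_features).pvalue
    # A low p-value signals distributional shift, i.e. low confidence
    # in the ML component's reasoning on the current inputs.
    return 1.0 if p_value >= alpha else p_value / alpha

def p_hazard(confidence):
    """Two-node BN, ML_failure -> Hazard, with illustrative CPTs.
    P(ML_failure) is driven by the SafeML confidence level."""
    p_fail = 1.0 - confidence      # P(ML_failure = True), assumed link
    p_h_given_fail = 0.30          # assumed CPT entry
    p_h_given_ok = 0.01            # assumed CPT entry
    return p_fail * p_h_given_fail + (1.0 - p_fail) * p_h_given_ok

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)    # stand-in for training features
runtime = rng.normal(0.8, 1.0, 200)   # shifted operational features
conf = safeml_confidence(train, runtime)
print(f"confidence={conf:.3f}  P(hazard)={p_hazard(conf):.4f}")
```

With the shifted runtime sample the test rejects at alpha = 0.05, so the confidence collapses and the hazard probability rises toward its failure-conditioned value; in the framework described above, a full causal BN over the platooning system would take the place of this two-node toy.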

Page Count
15 pages

Category
Computer Science:
Artificial Intelligence