Incorporating Failure of Machine Learning in Dynamic Probabilistic Safety Assurance
By: Razieh Arshadizadeh, Mahmoud Asgari, Zeinab Khosravi and more
Potential Business Impact:
Makes self-driving cars safer by checking their "thinking."
Machine Learning (ML) models are increasingly integrated into safety-critical systems, such as autonomous vehicle platooning, to enable real-time decision-making. However, their inherent imperfection introduces a new class of failure: reasoning failures, often triggered by distributional shifts between operational and training data. Traditional safety assessment methods, which rely on design artefacts or code, are ill-suited for ML components that learn behaviour from data. SafeML was recently proposed to dynamically detect such shifts and assign confidence levels to the reasoning of ML-based components. Building on this, we introduce a probabilistic safety assurance framework that integrates SafeML with Bayesian Networks (BNs) to model ML failures as part of a broader causal safety analysis. This allows for dynamic safety evaluation and system adaptation under uncertainty. We demonstrate the approach on a simulated automotive platooning system with traffic sign recognition. The findings highlight the potential broader benefits of explicitly modelling ML failures in safety assessment.
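To make the idea concrete, the sketch below shows one way such a pipeline could be wired together: a SafeML-style statistical distance check between training and operational feature samples is mapped to an ML-failure probability, which then feeds a small causal fragment (ML failure → hazard). The use of the Kolmogorov-Smirnov distance, the threshold values, and the conditional probabilities are illustrative assumptions for this example, not the parameters or the exact Bayesian Network used in the paper.

```python
# Minimal sketch (assumptions noted in comments): a SafeML-style
# distribution-shift check feeding a hand-rolled two-node Bayesian
# update for hazard probability.

import numpy as np
from scipy.stats import ks_2samp


def shift_to_failure_prob(training_features, operational_features,
                          low=0.05, high=0.30):
    """Map a Kolmogorov-Smirnov distance between training and operational
    feature samples to a rough probability that the ML component's
    reasoning has failed. The linear mapping and the cut-offs `low`/`high`
    are illustrative assumptions."""
    distance, _ = ks_2samp(training_features, operational_features)
    # Below `low`: treat as in-distribution; above `high`: assume failure likely.
    return float(np.clip((distance - low) / (high - low), 0.0, 1.0))


def hazard_probability(p_ml_failure,
                       p_hazard_given_failure=0.4,
                       p_hazard_given_ok=0.01):
    """Two-node causal fragment: ML_failure -> Hazard.
    Marginalise over the ML-failure node; the conditional probabilities
    are placeholder values, not taken from the paper."""
    return (p_hazard_given_failure * p_ml_failure
            + p_hazard_given_ok * (1.0 - p_ml_failure))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 5000)       # stand-in for a training-set feature
    operational = rng.normal(0.8, 1.2, 200)  # shifted operational window

    p_fail = shift_to_failure_prob(train, operational)
    print(f"estimated ML-failure probability: {p_fail:.2f}")
    print(f"hazard probability: {hazard_probability(p_fail):.3f}")
```

In a fuller framework, the hazard node would sit inside a larger Bayesian Network alongside other causal factors, and the runtime shift estimate would simply update the evidence on the ML-failure node rather than being combined by hand as above.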
Similar Papers
Enhancing Robot Safety via MLLM-Based Semantic Interpretation of Failure Data
Robotics
Helps robots learn from mistakes automatically.
Safe Physics-Informed Machine Learning for Dynamics and Control
Systems and Control
Makes robots and cars safer using smart math.
Think in Safety: Unveiling and Mitigating Safety Alignment Collapse in Multimodal Large Reasoning Model
Computation and Language
Makes AI safer by teaching it to think before acting.