Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
By: Giovanni Floreale, Piero Baraldi, Enrico Zio, and more
Potential Business Impact:
Finds bad AI guesses in pictures of power lines.
Deep Learning (DL) models processing images to recognize the health state of large infrastructure components can exhibit biases and rely on non-causal shortcuts. eXplainable Artificial Intelligence (XAI) can address these issues, but manually analyzing the explanations generated by XAI techniques is time-consuming and prone to errors. This work proposes a novel framework that combines post-hoc explanations with semi-supervised learning to automatically identify anomalous explanations, i.e., explanations that deviate from those of correctly classified images and may therefore indicate abnormal model behaviors. This significantly reduces the workload for maintenance decision-makers, who only need to manually reclassify images flagged as having anomalous explanations. The proposed framework is applied to drone-collected images of insulator shells for power grid infrastructure monitoring, considering two different Convolutional Neural Networks (CNNs), GradCAM explanations and Deep Semi-Supervised Anomaly Detection. The average classification accuracy on two faulty classes is improved by 8%, and maintenance operators are required to manually reclassify only 15% of the images. We compare the proposed framework with a state-of-the-art approach based on the faithfulness metric: the experimental results demonstrate that the proposed framework consistently achieves F_1 scores larger than those of the faithfulness-based approach. Additionally, the proposed framework successfully identifies correct classifications that result from non-causal shortcuts, such as the presence of ID tags printed on insulator shells.
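To make the pipeline concrete, the sketch below shows one way such a framework could be wired together. It is not the authors' implementation: the function names, network sizes, target layer, and thresholding step are illustrative assumptions. The idea it reproduces is the one stated in the abstract: extract a Grad-CAM heatmap for each CNN prediction, embed the heatmaps, learn a center (DeepSAD-style) from explanations of correctly classified images, and flag images whose explanation lies far from that center for manual reclassification.

import torch
import torch.nn.functional as F

def extract_gradcam(model, image, target_layer, class_idx=None):
    """Compute a Grad-CAM heatmap (h, w) for one image tensor of shape (C, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(_module, _inputs, output):
        activations["value"] = output                      # feature maps of target layer

    def bwd_hook(_module, _grad_in, grad_out):
        gradients["value"] = grad_out[0]                   # gradients w.r.t. those maps

    h_fwd = target_layer.register_forward_hook(fwd_hook)
    h_bwd = target_layer.register_full_backward_hook(bwd_hook)

    logits = model(image.unsqueeze(0))
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()            # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()

    h_fwd.remove()
    h_bwd.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted combination
    cam = cam / (cam.max() + 1e-8)                                # normalize to [0, 1]
    return cam.squeeze(0).detach()


class ExplanationScorer(torch.nn.Module):
    """DeepSAD-style scorer: distance of an explanation embedding to a learned center.

    The encoder architecture and embedding size here are assumptions for illustration.
    """

    def __init__(self, in_pixels, embed_dim=32):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Linear(in_pixels, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, embed_dim),
        )
        self.register_buffer("center", torch.zeros(embed_dim))

    def fit_center(self, normal_heatmaps):
        # Center = mean embedding of explanations from correctly classified images.
        with torch.no_grad():
            self.center = self.encoder(normal_heatmaps).mean(dim=0)

    def score(self, heatmaps):
        # Larger distance from the center -> more anomalous explanation.
        return ((self.encoder(heatmaps) - self.center) ** 2).sum(dim=1)

# Illustrative triage step (the threshold is an assumed hyperparameter, e.g. a high
# quantile of scores on held-out correctly classified images):
#   flagged = scorer.score(test_heatmaps) > threshold
# Flagged images would be routed to the maintenance operator for manual reclassification,
# which is how the framework limits the manual workload described in the abstract.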
Similar Papers
Generalizable and Explainable Deep Learning for Medical Image Computing: An Overview
CV and Pattern Recognition
Shows doctors why computers think images are sick.
Explaining What Machines See: XAI Strategies in Deep Object Detection Models
CV and Pattern Recognition
Shows how smart computers "see" to make them trustworthy.
Evaluating explainable AI for deep learning-based network intrusion detection system alert classification
Cryptography and Security
Helps computers find cyber threats faster.