Explainable AI to Improve Machine Learning Reliability for Industrial Cyber-Physical Systems
By: Annemarie Jutte, Uraz Odyurt
Potential Business Impact:
Makes smart factory machines more reliable.
Industrial Cyber-Physical Systems (CPS) are sensitive infrastructure from both safety and economics perspectives, making their reliability critically important. Machine Learning (ML), specifically deep learning, is increasingly integrated into industrial CPS, but the inherent complexity of ML models results in non-transparent operation. Rigorous evaluation is needed to prevent models from exhibiting unexpected behaviour on future, unseen data. Explainable AI (XAI) can be used to uncover model reasoning, allowing a more extensive analysis of behaviour. We apply XAI to improve the predictive performance of ML models intended for industrial CPS. We analyse the effects of components from time-series data decomposition on model predictions using SHAP values. Through this method, we observe evidence of a lack of sufficient contextual information during model training. By increasing the window size of data instances, informed by the XAI findings, we are able to improve model performance.
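The workflow described in the abstract (decompose the time series, train a window-based model, then inspect per-component SHAP contributions) can be illustrated with a minimal sketch. This is not the authors' pipeline: the RandomForestRegressor, statsmodels' seasonal_decompose, the synthetic sensor signal, the 24-sample period, and the window size of 12 are all illustrative assumptions.

    # Minimal sketch, assuming scikit-learn, statsmodels, and the shap library.
    # Data, model choice, and window size are hypothetical, not the paper's setup.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor
    from statsmodels.tsa.seasonal import seasonal_decompose

    # Synthetic sensor signal standing in for industrial CPS time-series data.
    rng = np.random.default_rng(0)
    t = np.arange(600)
    series = pd.Series(0.01 * t + np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.2, t.size))

    # Decompose into trend, seasonal, and residual components.
    decomp = seasonal_decompose(series, period=24)
    components = pd.DataFrame({
        "trend": decomp.trend,
        "seasonal": decomp.seasonal,
        "residual": decomp.resid,
    }).dropna()

    # Build sliding windows over the components; each flattened window is one instance.
    window = 12  # hypothetical window size, to be revisited based on the SHAP findings
    values = components.to_numpy()
    target = series.loc[components.index].to_numpy()
    X = np.array([values[i:i + window].ravel() for i in range(len(values) - window)])
    y = np.array([target[i + window] for i in range(len(values) - window)])
    feature_names = [f"{col}_lag{window - i}" for i in range(window) for col in components.columns]

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # SHAP values per feature, aggregated per decomposition component, show which
    # component drives the model's predictions for the chosen window size.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:200])
    per_component = {
        comp: np.abs(shap_values[:, [j for j, n in enumerate(feature_names)
                                     if n.startswith(comp)]]).mean()
        for comp in components.columns
    }
    print(per_component)

Comparing the aggregated SHAP magnitudes per component, and re-running the analysis with a larger window, mirrors the paper's strategy of using XAI findings to judge whether data instances carry enough contextual information.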
Similar Papers
Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
CV and Pattern Recognition
Finds bad AI guesses in pictures of power lines.
Explainable AI: Learning from the Learners
Artificial Intelligence
AI learns how it learns, helping us discover more.
From Black Box to Insight: Explainable AI for Extreme Event Preparedness
Machine Learning (CS)
Helps predict wildfires so we can prepare.