Assessing reliability of explanations in unbalanced datasets: a use-case on the occurrence of frost events
By: Ilaria Vascotto, Valentina Blasone, Alex Rodriguez, and more
Potential Business Impact:
Makes AI explanations trustworthy, even with rare events.
The use of eXplainable Artificial Intelligence (XAI) methods has become essential in practical applications, given the increasing deployment of Artificial Intelligence (AI) models and the legislative requirements put forward in recent years. A fundamental but often underestimated aspect of explanations is their robustness, a key property that must hold for the explanations to be trusted. In this study, we provide preliminary insights on evaluating the reliability of explanations in the specific case of unbalanced datasets, which are very frequent in high-risk use-cases but at the same time considerably challenging for both AI models and XAI methods. We propose a simple evaluation focused on the minority class (i.e. the less frequent one) that leverages on-manifold generation of neighbours, explanation aggregation, and a metric to test explanation consistency. We present a use-case based on a tabular dataset with numerical features, focusing on the occurrence of frost events.
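To make the pipeline concrete, here is a minimal Python sketch of the kind of evaluation the abstract describes. It is an illustration under stated assumptions, not the paper's implementation: the hypothetical explain function stands in for any feature-attribution method, plain Gaussian perturbation replaces the paper's on-manifold neighbour generation, and cosine similarity is just one plausible consistency metric.

    # Sketch of the evaluation idea: perturb minority-class instances,
    # explain instance and neighbours, aggregate, and score consistency.
    # All names (explain, scale, n_neighbours) are illustrative assumptions.
    import numpy as np

    def generate_neighbours(x, n_neighbours=50, scale=0.05, rng=None):
        # Gaussian noise is a simplification; the paper generates
        # neighbours on-manifold so they stay close to the data manifold.
        rng = rng or np.random.default_rng(0)
        return x + rng.normal(0.0, scale, size=(n_neighbours, x.shape[0]))

    def consistency(explanation, neighbour_explanations):
        # Aggregate neighbour attributions (mean), then compare them to the
        # instance's attribution vector via cosine similarity.
        agg = neighbour_explanations.mean(axis=0)
        num = float(explanation @ agg)
        den = np.linalg.norm(explanation) * np.linalg.norm(agg) + 1e-12
        return num / den

    def evaluate_minority_class(X_minority, explain):
        # explain(x) -> np.ndarray of feature attributions (e.g. a
        # LIME/SHAP wrapper); it is assumed here, not defined.
        scores = []
        for x in X_minority:
            neighbours = generate_neighbours(x)
            e_x = explain(x)
            e_nb = np.stack([explain(nb) for nb in neighbours])
            scores.append(consistency(e_x, e_nb))
        return float(np.mean(scores))  # closer to 1 = more consistent

Restricting the loop to X_minority reflects the abstract's focus: in unbalanced datasets the minority class is where both the model and the explanations are most fragile, so averaging a consistency score over those instances gives a targeted reliability estimate.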
Similar Papers
Beyond single-model XAI: aggregating multi-model explanations for enhanced trustworthiness
Machine Learning (CS)
Makes AI decisions easier to trust.
Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
CV and Pattern Recognition
Finds bad AI guesses in pictures of power lines.
Explainable AI-Based Interface System for Weather Forecasting Model
Artificial Intelligence
Helps weather forecasters trust computer predictions.