CFIRE: A General Method for Combining Local Explanations
By: Sebastian Müller, Vanessa Toborek, Tamás Horváth, and more
Potential Business Impact:
Makes AI decisions easy to understand.
We propose a novel eXplainable AI algorithm that computes faithful, easy-to-understand, and complete global decision rules from local explanations for tabular data by combining XAI methods with closed frequent itemset mining. Our method can be used with any local explainer that indicates which dimensions are important for a given sample and a given black-box decision. This property allows our algorithm to choose among different local explainers, addressing the disagreement problem, i.e., the observation that no single explanation method consistently outperforms others across models and datasets. Unlike the usual experimental methodology, our evaluation also accounts for the Rashomon effect in model explainability. To this end, we demonstrate the robustness of our approach in finding suitable rules for nearly all of the 700 black-box models we considered across 14 benchmark datasets. The results also show that our method exhibits improved runtime and high precision and F1-scores while generating compact and complete rules.
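To give a flavor of the core idea, the sketch below is a minimal, illustrative Python example (not the authors' CFIRE implementation): local attribution scores are reduced to per-sample itemsets of important feature indices, a naive closed frequent itemset miner finds feature combinations that are repeatedly flagged together, and these can then serve as candidate premises for global rules. The function names, `top_k`, and `min_support` values are assumptions made for the example.

```python
# Minimal sketch (not the authors' CFIRE implementation): turn local
# feature-importance explanations into per-sample itemsets, mine closed
# frequent itemsets, and read them off as candidate global rule premises.
from itertools import combinations
from collections import defaultdict

def explanations_to_itemsets(importances, top_k=2):
    """Each row of `importances` holds local attribution scores for one
    sample; keep the indices of the top_k most important features."""
    itemsets = []
    for row in importances:
        ranked = sorted(range(len(row)), key=lambda j: abs(row[j]), reverse=True)
        itemsets.append(frozenset(ranked[:top_k]))
    return itemsets

def closed_frequent_itemsets(itemsets, min_support=0.3):
    """Naive closed frequent itemset mining: enumerate candidate subsets,
    count support, and keep frequent itemsets that have no frequent proper
    superset with the same support. Fine for a toy example; dedicated
    miners scale far better."""
    n = len(itemsets)
    support = defaultdict(int)
    for items in itemsets:
        for size in range(1, len(items) + 1):
            for subset in combinations(sorted(items), size):
                support[frozenset(subset)] += 1
    frequent = {s: c for s, c in support.items() if c / n >= min_support}
    closed = {}
    for s, c in frequent.items():
        if not any(s < t and c == c_t for t, c_t in frequent.items()):
            closed[s] = c / n
    return closed

# Toy usage: fake local explanations for 6 samples over 4 features.
local_expl = [
    [0.9, 0.1, 0.7, 0.0],
    [0.8, 0.0, 0.6, 0.1],
    [0.7, 0.2, 0.9, 0.0],
    [0.1, 0.8, 0.0, 0.9],
    [0.0, 0.9, 0.1, 0.7],
    [0.2, 0.7, 0.0, 0.8],
]
itemsets = explanations_to_itemsets(local_expl, top_k=2)
for features, supp in closed_frequent_itemsets(itemsets).items():
    print(f"features {sorted(features)} co-occur as important in {supp:.0%} of samples")
```

On the toy data this yields two closed itemsets ({0, 2} and {1, 3}), illustrating how closedness collapses redundant sub-combinations into the most specific feature sets that recur across local explanations.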
Similar Papers
FIRE: Faithful Interpretable Recommendation Explanations
Information Retrieval
Explains why you get certain movie suggestions.
Assessing reliability of explanations in unbalanced datasets: a use-case on the occurrence of frost events
Machine Learning (CS)
Makes AI explanations trustworthy, even with rare events.
Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
CV and Pattern Recognition
Finds bad AI guesses in pictures of power lines.