On Trustworthy Rule-Based Models and Explanations
By: Mohamed Siala, Jordi Planes, Joao Marques-Silva
Potential Business Impact:
Detects hidden flaws in rule-based AI models so their explanations can be trusted.
A task of interest in machine learning (ML) is that of ascribing explanations to the predictions made by ML models. Furthermore, in domains deemed high risk, the rigor of explanations is paramount. Indeed, incorrect explanations can and will mislead human decision makers. As a result, and even if interpretability is acknowledged as an elusive concept, so-called interpretable models are employed ubiquitously in high-risk uses of ML and data mining (DM). This is the case for rule-based ML models, which encompass decision trees, diagrams, sets and lists. This paper relates explanations with well-known undesired facets of rule-based ML models, which include negative overlap and several forms of redundancy. The paper develops algorithms for the analysis of these undesired facets of rule-based systems, and concludes that well-known and widely used tools for learning rule-based ML models will induce rule sets that exhibit one or more negative facets.
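To make one of these undesired facets concrete, the sketch below checks a pair of rules from a decision set for negative overlap, i.e., the existence of an input that satisfies the conditions of both rules while the rules predict different classes. This is only a minimal illustration under simplifying assumptions: the `Rule` representation, the interval-based features, and the helper names are hypothetical, not the paper's algorithms, which reason about general rule-based models.

```python
# Minimal sketch (not the paper's implementation): detecting "negative overlap"
# between two rules of a decision set over numeric features.
# A rule is assumed to be a conjunction of interval constraints plus a predicted class.

from dataclasses import dataclass, field

@dataclass
class Rule:
    # hypothetical representation: feature index -> (lower bound, upper bound)
    bounds: dict = field(default_factory=dict)
    prediction: int = 0

def conditions_jointly_satisfiable(r1: Rule, r2: Rule) -> bool:
    """True if some feature assignment satisfies the conditions of both rules."""
    for f in set(r1.bounds) & set(r2.bounds):
        lo1, hi1 = r1.bounds[f]
        lo2, hi2 = r2.bounds[f]
        if max(lo1, lo2) > min(hi1, hi2):  # intervals on feature f are disjoint
            return False
    return True

def negative_overlap(r1: Rule, r2: Rule) -> bool:
    """Negative overlap: overlapping conditions but conflicting predictions."""
    return r1.prediction != r2.prediction and conditions_jointly_satisfiable(r1, r2)

if __name__ == "__main__":
    # Two toy rules: their conditions overlap on 2 <= x0 <= 5, yet they disagree
    # on the predicted class, so the decision set is ambiguous on that region.
    ra = Rule(bounds={0: (0.0, 5.0)}, prediction=1)
    rb = Rule(bounds={0: (2.0, 9.0)}, prediction=0)
    print(negative_overlap(ra, rb))  # True: this pair exhibits negative overlap
```

In this toy setting the check is a simple interval intersection; for the richer rule languages considered in the paper, such checks are instead delegated to automated-reasoning tools.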
Similar Papers
Beware of "Explanations" of AI
Machine Learning (CS)
Warns that AI explanations can mislead and should be used with care.
Beyond single-model XAI: aggregating multi-model explanations for enhanced trustworthiness
Machine Learning (CS)
Combines explanations from several models so AI decisions are easier to trust.
How can we trust opaque systems? Criteria for robust explanations in XAI
Machine Learning (CS)
Sets out criteria for when explanations of opaque AI systems can be trusted.