On Trustworthy Rule-Based Models and Explanations

Published: July 10, 2025 | arXiv ID: 2507.07576v1

By: Mohamed Siala, Jordi Planes, Joao Marques-Silva

Potential Business Impact:

Provides algorithms that detect overlap and redundancy in rule-based ML models, helping organizations verify that the interpretable models deployed in high-risk settings produce explanations that do not mislead decision makers.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

A task of interest in machine learning (ML) is that of ascribing explanations to the predictions made by ML models. Moreover, in domains deemed high risk, the rigor of explanations is paramount: incorrect explanations can and will mislead human decision makers. As a result, and even though interpretability is acknowledged as an elusive concept, so-called interpretable models are employed ubiquitously in high-risk uses of ML and data mining (DM). This is the case for rule-based ML models, which encompass decision trees, decision diagrams, decision sets, and decision lists. This paper relates explanations to well-known undesired facets of rule-based ML models, including negative overlap and several forms of redundancy. The paper develops algorithms for analyzing these undesired facets of rule-based systems, and concludes that well-known and widely used tools for learning rule-based ML models will induce rule sets that exhibit one or more of these negative facets.
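The abstract does not spell out how negative overlap is checked, but the notion can be illustrated concretely: two rules overlap negatively when their premises can fire on the same input while predicting different classes. The sketch below is a minimal, hypothetical illustration, assuming rules are conjunctions of literals over binary features; the rule representation, function names, and example rules are illustrative choices, not the paper's actual algorithms, which target general rule-based models.

```python
from itertools import combinations

# A rule is a (premise, prediction) pair. A premise maps a binary feature
# name to the value it requires; the rule fires on an input iff every
# listed feature matches. (Illustrative encoding, not the paper's.)
Rule = tuple[dict, str]

def premises_consistent(p1: dict, p2: dict) -> bool:
    """Two premises are jointly satisfiable iff they assign no feature
    conflicting values; for conjunctions of literals this check is exact."""
    return all(p2[f] == v for f, v in p1.items() if f in p2)

def negative_overlaps(rules: list[Rule]) -> list[tuple[int, int]]:
    """Return index pairs of rules that can fire on the same input
    while predicting different classes (negative overlap)."""
    flagged = []
    for (i, (p1, c1)), (j, (p2, c2)) in combinations(enumerate(rules), 2):
        if c1 != c2 and premises_consistent(p1, p2):
            flagged.append((i, j))
    return flagged

if __name__ == "__main__":
    rules = [
        ({"x1": True, "x2": False}, "approve"),  # IF x1 AND NOT x2 THEN approve
        ({"x1": True},              "deny"),     # IF x1 THEN deny
        ({"x2": True},              "deny"),     # IF x2 THEN deny
    ]
    # Rules 0 and 1 both fire on x1=1, x2=0 yet disagree on the class.
    print(negative_overlaps(rules))  # [(0, 1)]
```

A simple form of redundancy could be flagged in the same pairwise fashion, e.g. when one rule's premise subsumes another's while both predict the same class; the paper's analysis covers richer rule languages and further forms of redundancy beyond this toy check.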

Page Count
21 pages

Category
Computer Science:
Artificial Intelligence