Improving Compactness and Reducing Ambiguity of CFIRE Rule-Based Explanations
By: Sebastian Müller, Tobias Schneider, Ruben Kemna, and more
Potential Business Impact:
Makes computer decisions easier to understand.
Models trained on tabular data are widely used in sensitive domains, increasing the demand for explanation methods to meet transparency needs. CFIRE is a recent algorithm in this domain that constructs compact surrogate rule models from local explanations. While effective, CFIRE may assign rules associated with different classes to the same sample, introducing ambiguity. We investigate this ambiguity and propose a post-hoc pruning strategy that removes rules with low contribution or conflicting coverage, yielding smaller and less ambiguous models while preserving fidelity. Experiments across multiple datasets confirm these improvements with minimal impact on predictive performance.
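To make the pruning idea concrete, here is a minimal Python sketch of a post-hoc filter that first drops low-contribution rules and then removes rules whose coverage conflicts with a stronger rule of a different class. The `Rule` structure, the scoring, and the thresholds are illustrative assumptions for this sketch, not CFIRE's actual data structures, API, or the authors' exact procedure.

```python
# Illustrative sketch of the pruning strategy described above.
# Rule, contribution scores, and thresholds are assumptions, not CFIRE's API.
from dataclasses import dataclass

@dataclass
class Rule:
    label: int            # class the rule votes for
    covered: frozenset    # indices of the samples the rule covers
    contribution: float   # assumed score, e.g. marginal effect on fidelity

def prune_rules(rules, min_contribution=0.01, max_conflict=0.5):
    """Drop low-contribution rules, then rules whose coverage mostly
    overlaps an already-kept, stronger rule of a different class."""
    # Step 1: remove rules that barely contribute to the surrogate model.
    kept = [r for r in rules if r.contribution >= min_contribution]
    # Step 2: scan from strongest to weakest; keep a rule only if its
    # coverage does not conflict too much with a kept rule of another class.
    kept.sort(key=lambda r: r.contribution, reverse=True)
    result = []
    for r in kept:
        conflict = 0.0
        for s in result:
            if s.label != r.label and r.covered:
                overlap = len(r.covered & s.covered) / len(r.covered)
                conflict = max(conflict, overlap)
        if conflict <= max_conflict:
            result.append(r)
    return result

# Tiny usage example with hypothetical rules:
rules = [
    Rule(label=0, covered=frozenset({0, 1, 2}), contribution=0.40),
    Rule(label=1, covered=frozenset({1, 2}),    contribution=0.05),  # conflicts with rule 1
    Rule(label=1, covered=frozenset({3, 4}),    contribution=0.30),
]
print(len(prune_rules(rules)))  # -> 2: the weak, conflicting rule is removed
```

Under these assumptions, the filter shrinks the rule set and removes the class ambiguity on overlapping samples while keeping the highest-contribution rules, mirroring the trade-off the abstract reports between compactness and fidelity.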
Similar Papers
CFIRE: A General Method for Combining Local Explanations
Machine Learning (CS)
Makes AI decisions easy to understand.
FIRE: Faithful Interpretable Recommendation Explanations
Information Retrieval
Explains why you get certain movie suggestions.
On Trustworthy Rule-Based Models and Explanations
Artificial Intelligence
Fixes confusing computer rules to make them clearer.