CID: Measuring Feature Importance Through Counterfactual Distributions
By: Eddie Conti, Álvaro Parafita, Axel Brando
Potential Business Impact:
Shows why computers make certain choices.
Assessing the importance of individual features in Machine Learning is critical to understanding a model's decision-making process. While numerous methods exist, the lack of a definitive ground truth for comparison highlights the need for alternative, well-founded measures. This paper introduces a novel post-hoc local feature importance method called Counterfactual Importance Distribution (CID). We generate two sets of counterfactuals, one positive and one negative, model their distributions using Kernel Density Estimation, and rank features according to a distributional dissimilarity measure. This measure, grounded in a rigorous mathematical framework, satisfies the key properties required of a valid metric. We showcase the effectiveness of our method by comparing it with well-established local feature importance explainers. Our method not only offers complementary perspectives to existing approaches, but also improves performance on faithfulness metrics (both comprehensiveness and sufficiency), resulting in more faithful explanations of the system. These results highlight its potential as a valuable tool for model analysis.
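The abstract describes the pipeline only at a high level: take positive and negative counterfactual sets, fit per-feature densities with Kernel Density Estimation, and rank features by how dissimilar the two densities are. The sketch below illustrates that idea under stated assumptions; it is not the paper's implementation. The counterfactual sets are taken as given, the dissimilarity is an illustrative L2 distance between KDE densities (the paper defines its own measure), and the function names (`feature_dissimilarity`, `rank_features`) and toy data are hypothetical.

```python
# Illustrative sketch of the CID idea: rank features by the dissimilarity
# between the distributions of positive vs. negative counterfactuals.
# NOT the authors' implementation; the dissimilarity measure here (L2 distance
# between KDE densities on a grid) is an assumed stand-in.
import numpy as np
from scipy.stats import gaussian_kde


def feature_dissimilarity(pos_vals, neg_vals, grid_size=200):
    """Dissimilarity between per-feature densities of the two counterfactual sets."""
    kde_pos = gaussian_kde(pos_vals)
    kde_neg = gaussian_kde(neg_vals)
    lo = min(pos_vals.min(), neg_vals.min())
    hi = max(pos_vals.max(), neg_vals.max())
    grid = np.linspace(lo, hi, grid_size)
    step = grid[1] - grid[0]
    # L2 distance between the two estimated densities (illustrative choice).
    return np.sqrt(np.sum((kde_pos(grid) - kde_neg(grid)) ** 2) * step)


def rank_features(pos_cf, neg_cf, feature_names):
    """pos_cf, neg_cf: arrays of shape (n_counterfactuals, n_features)."""
    scores = {
        name: feature_dissimilarity(pos_cf[:, j], neg_cf[:, j])
        for j, name in enumerate(feature_names)
    }
    # Higher dissimilarity -> feature ranked as more important.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy counterfactual sets: "x1" differs strongly between the two sets,
    # "x2" barely differs, so "x1" should be ranked as more important.
    pos = np.column_stack([rng.normal(2.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])
    neg = np.column_stack([rng.normal(-2.0, 1.0, 100), rng.normal(0.1, 1.0, 100)])
    print(rank_features(pos, neg, ["x1", "x2"]))
```

In practice the counterfactual sets would come from a counterfactual generator applied to the instance being explained; the sketch only covers the density-comparison and ranking step.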
Similar Papers
Identifying counterfactual probabilities using bivariate distributions and uplift modeling
Machine Learning (CS)
Finds out if a sale *really* made a customer buy.
Out-of-Distribution Detection using Counterfactual Distance
Machine Learning (CS)
Helps computers know when they see something new.
Interpretable Model-Aware Counterfactual Explanations for Random Forest
Machine Learning (Stat)
Explains why computer decisions change outcomes.