Actionable Counterfactual Explanations Using Bayesian Networks and Path Planning with Applications to Environmental Quality Improvement
By: Enrique Valero-Leal, Pedro Larrañaga, Concha Bielza
Potential Business Impact:
Helps computers explain decisions fairly and privately.
Counterfactual explanations study what should have changed in order to obtain an alternative result, enabling end-users to understand machine learning mechanisms through counterexamples. Actionability is defined as the ability to transform the original case to be explained into a counterfactual one. We develop a method for actionable counterfactual explanations that, unlike its predecessors, does not directly leverage training data. Rather, the data is used only to learn a density estimator, creating a search landscape in which path planning algorithms solve the problem while masking the endogenous data, which can be sensitive or private. We put special focus on estimating the data density using Bayesian networks, demonstrating how their enhanced interpretability is useful in high-stakes scenarios in which fairness is a rising concern. On a synthetic benchmark comprising 15 datasets, our proposal finds more actionable and simpler counterfactuals than current state-of-the-art algorithms. We also test our algorithm on a real-world Environmental Protection Agency dataset, facilitating a more efficient and equitable study of policies to improve the quality of life in counties of the United States of America. Our proposal captures the interaction of variables, ensuring equity in decisions, as policies that improve certain domains of study (air quality, water quality, etc.) can be detrimental to others. In particular, the sociodemographic domain is often involved, where we find important variables related to the ongoing housing crisis that could have a severe negative impact on communities.
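The abstract describes the method only at a high level. Purely as an illustration of the general idea (not the authors' algorithm), the toy Python sketch below assumes a 2-D feature space, a single-Gaussian density standing in for the learned Bayesian network, a hypothetical linear classifier `predicts_positive`, and Dijkstra-style path planning whose step cost penalises low-density regions, so the path from the factual point to a counterfactual stays in plausible, actionable territory.

```python
import heapq
import numpy as np

# Placeholder density estimator: a single Gaussian standing in for the learned
# Bayesian network. An unnormalised log-density is enough to define step costs.
MEAN = np.array([0.0, 0.0])
COV_INV = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))

def log_density(x):
    d = x - MEAN
    return -0.5 * d @ COV_INV @ d

# Placeholder classifier whose decision we want to flip for the factual point.
def predicts_positive(x):
    return x[0] + x[1] > 2.0

def find_counterfactual(x0, step=0.1, lam=1.0, max_pops=50000):
    """Dijkstra-style path planning on a discretised 2-D feature grid.
    Step cost = move length + lam * (negative log-density), so the cheapest
    path to the target class prefers high-density (plausible) regions."""
    start = tuple(np.round(np.asarray(x0) / step).astype(int))
    frontier = [(0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    while frontier and max_pops > 0:
        max_pops -= 1
        cost_popped, node = heapq.heappop(frontier)
        if cost_popped > g[node]:
            continue  # stale queue entry
        x = np.array(node) * step
        if predicts_positive(x):  # reached a counterfactual: rebuild the path
            path = []
            while node is not None:
                path.append(np.array(node) * step)
                node = parent[node]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nxt = (node[0] + dx, node[1] + dy)
                x_next = np.array(nxt) * step
                move = np.linalg.norm(x_next - x) + lam * (-log_density(x_next))
                if g[node] + move < g.get(nxt, np.inf):
                    g[nxt] = g[node] + move
                    parent[nxt] = node
                    heapq.heappush(frontier, (g[nxt], nxt))
    return None  # no counterfactual found within the search budget

path = find_counterfactual(np.array([-1.0, -1.0]))
if path is not None:
    print("factual:", path[0], "counterfactual:", path[-1], f"({len(path)} steps)")
```

In the approach described above, the density would come from a Bayesian network fitted to the training data, so the search only ever consults the learned model rather than exposing individual records.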
Similar Papers
Actionable and diverse counterfactual explanations incorporating domain knowledge and causal constraints
Artificial Intelligence
Makes AI suggestions practical and believable.
P2C: Path to Counterfactuals
Artificial Intelligence
Shows how to fix bad computer decisions step-by-step.
From Facts to Foils: Designing and Evaluating Counterfactual Explanations for Smart Environments
Artificial Intelligence
Helps smart homes explain why things happened.