ACE: Adaptive Sampling for Counterfactual Explanations
By: Margarita A. Guerrero, Cristian R. Rojas
Potential Business Impact:
Finds the smallest input changes that flip a computer's guess.
Counterfactual Explanations (CFEs) interpret machine learning models by identifying the smallest change to input features needed to change the model's prediction to a desired output. For classification tasks, CFEs determine how close a given sample is to the decision boundary of a trained classifier. Existing methods are often sample-inefficient, requiring numerous evaluations of a black-box model -- an approach that is both costly and impractical when access to the model is limited. We propose Adaptive sampling for Counterfactual Explanations (ACE), a sample-efficient algorithm combining Bayesian estimation and stochastic optimization to approximate the decision boundary with fewer queries. By prioritizing informative points, ACE minimizes evaluations while generating accurate and feasible CFEs. Extensive empirical results show that ACE achieves superior evaluation efficiency compared to state-of-the-art methods, while maintaining effectiveness in identifying minimal and actionable changes.
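The paper itself does not include code here, but the core idea, approximating the distance to a black-box classifier's decision boundary using as few queries as possible, can be illustrated with a toy sketch. Note that ACE's actual method combines Bayesian estimation with stochastic optimization; the sketch below instead uses a much simpler stand-in (random search directions plus bisection), and the `black_box` classifier, function names, and parameters are all hypothetical, chosen only to make the query-efficiency idea concrete.

```python
import numpy as np

def black_box(x):
    # Hypothetical black-box classifier standing in for any model
    # we can only query, not inspect: a simple linear rule.
    return int(x.sum() > 1.0)

def counterfactual_search(x0, predict, n_dirs=20, n_bisect=12, radius=2.0, seed=0):
    """Illustrative query-efficient counterfactual search (not ACE itself).

    Samples random directions from x0, keeps those whose label flips
    within `radius`, and bisects along each to locate the nearest
    boundary crossing. Bisection halves the search interval per query,
    so each direction costs only n_bisect + 1 model evaluations.
    Returns (None, inf) if no sampled direction flips the label.
    """
    rng = np.random.default_rng(seed)
    y0 = predict(x0)
    best, best_dist = None, np.inf
    for _ in range(n_dirs):
        d = rng.normal(size=x0.shape)
        d /= np.linalg.norm(d)
        if predict(x0 + radius * d) == y0:
            continue  # this direction does not reach the boundary
        lo, hi = 0.0, radius  # invariant: label unchanged at lo, flipped at hi
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            if predict(x0 + mid * d) == y0:
                lo = mid
            else:
                hi = mid
        if hi < best_dist:
            best, best_dist = x0 + hi * d, hi
    return best, best_dist

x0 = np.array([0.2, 0.3])
cf, dist = counterfactual_search(x0, black_box)
print(cf, dist)  # a nearby point with a flipped prediction, and its distance
```

Replacing the uniform random directions above with an adaptive, belief-driven choice of where to query next is what distinguishes a sample-efficient method like ACE from this naive baseline.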
Similar Papers
Counterfactual Scenarios for Automated Planning
Artificial Intelligence
Changes problems to get better results.
Back to the Feature: Explaining Video Classifiers with Video Counterfactual Explanations
CV and Pattern Recognition
Shows why a computer made a choice about a video.
Looking in the mirror: A faithful counterfactual explanation method for interpreting deep image classification models
CV and Pattern Recognition
Shows how changing pictures explains a computer's choices.