Classifier Reconstruction Through Counterfactual-Aware Wasserstein Prototypes
By: Xuan Zhao, Zhuo Cao, Arya Bangun, and more
Potential Business Impact:
Makes AI learn better with less data.
Counterfactual explanations provide actionable insights by identifying the minimal input changes required to achieve a desired model prediction. Beyond their interpretability benefits, counterfactuals can also be leveraged for model reconstruction, where a surrogate model is trained to replicate the behavior of a target model. In this work, we demonstrate that model reconstruction can be significantly improved by recognizing that counterfactuals, which typically lie close to the decision boundary, can serve as informative, though less representative, samples for both classes. This is particularly beneficial in settings with limited access to labeled data. We propose a method that integrates original data samples with counterfactuals to approximate class prototypes using the Wasserstein barycenter, thereby preserving the underlying distributional structure of each class. This approach enhances the quality of the surrogate model and mitigates the issue of decision boundary shift, which commonly arises when counterfactuals are naively treated as ordinary training instances. Empirical results across multiple datasets show that our method improves fidelity between the surrogate and target models, validating its effectiveness.
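The core idea, blending a class's original samples with its counterfactuals into a single Wasserstein-barycenter prototype, can be illustrated in one dimension, where the 2-Wasserstein barycenter of empirical distributions has a closed form: averaging the sorted samples (i.e., the quantile functions). The sketch below is a minimal illustration of that principle, not the paper's implementation; the feature values, the boundary location, and the 0.8/0.2 weighting that down-weights boundary-hugging counterfactuals are all assumptions made for the example.

```python
import numpy as np

def wasserstein_barycenter_1d(samples_list, weights=None):
    """Approximate the 2-Wasserstein barycenter of 1-D empirical
    distributions with equal sample counts by averaging their sorted
    samples (exact in 1-D, since W2 acts on quantile functions)."""
    n = len(samples_list[0])
    assert all(len(s) == n for s in samples_list), "equal sample counts required"
    if weights is None:
        weights = np.full(len(samples_list), 1.0 / len(samples_list))
    sorted_samples = np.stack([np.sort(np.asarray(s)) for s in samples_list])
    return weights @ sorted_samples  # weighted average of quantile functions

# Hypothetical 1-D feature for one class: original samples sit well inside
# the class region, counterfactuals cluster near the decision boundary (~0).
rng = np.random.default_rng(0)
originals = rng.normal(2.0, 1.0, size=200)
counterfactuals = rng.normal(0.3, 0.2, size=200)

# Prototype: barycenter of both sets, down-weighting the counterfactuals
# so the prototype is not dragged onto the boundary (boundary shift).
prototype = wasserstein_barycenter_1d(
    [originals, counterfactuals], weights=np.array([0.8, 0.2]))
```

The resulting prototype interpolates between the two empirical distributions in Wasserstein space, so it retains the spread of the original class while being nudged toward the boundary information carried by the counterfactuals; a surrogate model can then be trained against such prototypes rather than against raw counterfactual labels.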
Similar Papers
Unifying Image Counterfactuals and Feature Attributions with Latent-Space Adversarial Attacks
Machine Learning (CS)
Shows why computers see what they see.
Graph Diffusion Counterfactual Explanation
Machine Learning (CS)
Helps AI explain why it makes graph decisions.
Mitigating Clever Hans Strategies in Image Classifiers through Generating Counterexamples
Machine Learning (CS)
Teaches computers to learn better, not just guess.