From Visual Explanations to Counterfactual Explanations with Latent Diffusion
By: Tung Luu, Nam Le, Duc Le, and more
Potential Business Impact:
Shows why computers make wrong picture guesses.
Visual counterfactual explanations are ideal hypothetical images that shift the classifier's decision, with high confidence, toward the desired class while remaining visually plausible and close to the initial image. In this paper, we propose a new approach to tackle two key challenges in recent prominent works: i) determining which specific counterfactual features are crucial for distinguishing the "concept" of the target class from the original class, and ii) supplying valuable explanations for non-robust classifiers without relying on the support of an adversarially robust model. Our method identifies the essential region to modify using visual explanation algorithms, and our framework then generates realistic counterfactual explanations by combining adversarial attacks, based on pruning the adversarial gradient of the target classifier, with a latent diffusion model. The proposed method outperforms previous state-of-the-art results on various evaluation criteria on the ImageNet and CelebA-HQ datasets. In general, our method can be applied to arbitrary classifiers, highlights the strong association between visual and counterfactual explanations, makes semantically meaningful changes driven by the target classifier, and provides observers with subtle counterfactual images.
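The sketch below illustrates, under stated assumptions, the pipeline the abstract describes: a visual-explanation mask localizes the region to edit, the adversarial gradient of the target classifier is pruned to that region, and the pruned gradient steers an update in a latent space that a decoder maps back to an image. It is not the authors' code: the saliency method, the masking rule, the toy classifier and decoder, and the omission of the actual diffusion denoising steps are all simplifying assumptions made for illustration.

```python
# Hypothetical sketch of mask-restricted counterfactual generation.
# All models and hyperparameters here are illustrative stand-ins,
# not the paper's actual classifier, explanation method, or diffusion model.
import torch
import torch.nn.functional as F

def saliency_mask(classifier, image, target_class, threshold=0.5):
    """Gradient-based saliency mask over the input (assumed explanation method)."""
    image = image.clone().requires_grad_(True)
    logits = classifier(image)
    logits[0, target_class].backward()
    saliency = image.grad.abs().mean(dim=1, keepdim=True)   # [1, 1, H, W]
    saliency = saliency / (saliency.max() + 1e-8)
    return (saliency > threshold).float()                    # binary edit region

def counterfactual_step(classifier, decoder, latent, mask, target_class, lr=0.05):
    """One masked adversarial update of the latent toward the target class."""
    latent = latent.clone().requires_grad_(True)
    image = decoder(latent)
    loss = F.cross_entropy(classifier(image), torch.tensor([target_class]))
    loss.backward()
    # Prune the adversarial gradient: keep only the region flagged by the mask.
    mask_latent = F.interpolate(mask, size=latent.shape[-2:], mode="nearest")
    pruned_grad = latent.grad * mask_latent
    return (latent - lr * pruned_grad).detach()

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end (purely illustrative).
    classifier = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10))
    decoder = torch.nn.Sequential(
        torch.nn.Upsample(scale_factor=4), torch.nn.Conv2d(4, 3, 3, padding=1))
    image = torch.rand(1, 3, 64, 64)
    latent = torch.randn(1, 4, 16, 16)
    mask = saliency_mask(classifier, image, target_class=3)
    for _ in range(5):
        latent = counterfactual_step(classifier, decoder, latent, mask, target_class=3)
    counterfactual = decoder(latent)
    print(counterfactual.shape)  # torch.Size([1, 3, 64, 64])
```

In the full method described by the abstract, the masked gradient would guide a latent diffusion model's denoising trajectory rather than a plain gradient-descent loop; the loop above only stands in for that guidance step.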
Similar Papers
Unifying Image Counterfactuals and Feature Attributions with Latent-Space Adversarial Attacks
Machine Learning (CS)
Shows why computers see what they see.
Diffusion Counterfactuals for Image Regressors
Machine Learning (CS)
Shows how to change pictures to get different results.
DocVCE: Diffusion-based Visual Counterfactual Explanations for Document Image Classification
Computer Vision and Pattern Recognition
Shows why computers decide documents are what they are.