Back to the Feature: Explaining Video Classifiers with Video Counterfactual Explanations
By: Chao Wang, Chengan Che, Xinyue Chen, and more
Potential Business Impact:
Helps show why a computer made a choice about a video.
Counterfactual explanations (CFEs) are minimal, semantically meaningful modifications of a model's input that alter the model's predictions. They highlight the decisive features the model relies on, providing contrastive interpretations for classifiers. State-of-the-art visual counterfactual explanation methods are designed to explain image classifiers; the generation of CFEs for video classifiers remains largely underexplored. For counterfactual videos to be useful, they must be physically plausible, temporally coherent, and exhibit smooth motion trajectories, and existing image-based CFE methods lack the capacity to generate such videos. To address this, we propose Back To The Feature (BTTF), an optimization framework that generates video CFEs. Our method introduces two novel components: 1) an optimization scheme to retrieve the initial latent noise conditioned on the first frame of the input video, and 2) a two-stage optimization strategy that enables the search for counterfactual videos in the vicinity of the input video. Both optimization processes are guided solely by the target classifier, ensuring the explanation is faithful. To accelerate convergence, we also introduce a progressive optimization strategy that incrementally increases the number of denoising steps. Extensive experiments on video datasets including Shape-Moving (motion classification), MEAD (emotion classification), and NTU RGB+D (action classification) show that BTTF effectively generates valid, visually similar, and realistic counterfactual videos that provide concrete insights into the classifier's decision-making mechanism.
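To make the described pipeline concrete, here is a minimal sketch of the two-stage, classifier-guided latent optimization with a progressive denoising schedule. The `ToyGenerator`, the toy classifier, and names such as `steps` and `latent_dim` are illustrative stand-ins rather than the paper's models or API; BTTF itself uses a pretrained conditional video diffusion model.

```python
# Minimal sketch of BTTF-style two-stage latent optimization.
# The generator and classifier below are toy stand-ins (assumptions),
# not the models used in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

T, C, H, W = 8, 3, 16, 16   # frames, channels, height, width
latent_dim = 64

# Toy differentiable "video generator": latent noise -> video. In the paper
# this is a conditional video diffusion model; `steps` here loosely mimics
# the number of denoising steps.
class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(latent_dim, T * C * H * W)
    def forward(self, z, steps=4):
        x = self.net(z)
        for _ in range(steps):          # more "denoising" steps -> smoother output
            x = torch.tanh(x)
        return x.view(-1, T, C, H, W)

# Toy video classifier (stand-in for the target classifier being explained).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(T * C * H * W, 2))
generator = ToyGenerator()
for p in list(classifier.parameters()) + list(generator.parameters()):
    p.requires_grad_(False)             # only the latent is optimized

input_video = torch.rand(1, T, C, H, W)
target_class = torch.tensor([1])        # class the counterfactual should flip to
ce = nn.CrossEntropyLoss()

# Stage 1: retrieve an initial latent whose first generated frame matches the
# first frame of the input video.
z = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for it in range(200):
    opt.zero_grad()
    first_frame = generator(z, steps=4)[:, 0]
    loss = ((first_frame - input_video[:, 0]) ** 2).mean()
    loss.backward()
    opt.step()

# Stage 2: search near that latent for a counterfactual video, guided only by
# the classifier, with a progressively increasing number of denoising steps.
z_cf = z.detach().clone().requires_grad_(True)
opt = torch.optim.Adam([z_cf], lr=1e-2)
for it in range(300):
    steps = 2 + it // 100               # progressive schedule: 2 -> 4 steps
    opt.zero_grad()
    video = generator(z_cf, steps=steps)
    loss = ce(classifier(video), target_class)          # flip the prediction
    loss += 0.1 * ((video - input_video) ** 2).mean()   # stay near the input
    loss.backward()
    opt.step()

cf_video = generator(z_cf, steps=4).detach()
print("counterfactual class:", classifier(cf_video).argmax(1).item())
```

The key design choice mirrored here is that gradients flow only into the latent: the generator and classifier stay frozen, so the resulting counterfactual is shaped solely by the target classifier's signal, which is what makes the explanation faithful.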
Similar Papers
Looking in the mirror: A faithful counterfactual explanation method for interpreting deep image classification models
CV and Pattern Recognition
Shows how changing pictures can explain a computer's choices.
ACE: Adapting sampling for Counterfactual Explanations
Machine Learning (CS)
Finds the smallest changes that flip a computer's guess.
Towards Desiderata-Driven Design of Visual Counterfactual Explainers
Machine Learning (CS)
Describes what good picture-changing explanations should look like.