Back to the Baseline: Examining Baseline Effects on Explainability Metrics
By: Agustin Martin Picard, Thibaut Boissin, Varshini Subhash, et al.
Potential Business Impact:
Makes the evaluation of AI explanations fairer by fixing a hidden bias in how they are scored.
Attribution methods are among the most prevalent techniques in Explainable Artificial Intelligence (XAI) and are usually evaluated and compared using Fidelity metrics, with Insertion and Deletion being the most popular. These metrics rely on a baseline function to alter the pixels of the input image that the attribution map deems most important. In this work, we highlight a critical problem with these metrics: the choice of a given baseline will inevitably favour certain attribution methods over others. More concerningly, even a simple linear model with commonly used baselines contradicts itself by designating different optimal methods. A question then arises: which baseline should we use? We propose to study this problem through two desirable properties of a baseline: (i) that it removes information and (ii) that it does not produce overly out-of-distribution (OOD) images. We first show that none of the tested baselines satisfy both criteria, and there appears to be a trade-off among current baselines: either they remove information or they produce a sequence of OOD images. Finally, we introduce a novel baseline by leveraging recent work in feature visualisation to artificially produce a model-dependent baseline that removes information without being overly OOD, thus improving on the trade-off when compared to other existing baselines. Our code is available at https://github.com/deel-ai-papers/Back-to-the-Baseline
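To make the mechanism concrete, here is a minimal sketch of the Deletion fidelity metric the abstract describes: pixels ranked most important by the attribution map are progressively replaced with a baseline value, and the model's score is tracked as information is removed. The function names (`deletion_auc`, `predict`) and the constant-zero baseline are illustrative assumptions, not the paper's implementation; the paper's point is precisely that swapping in a different baseline can change which attribution method scores best.

```python
import numpy as np

def deletion_auc(image, attribution, predict, baseline_value=0.0, steps=10):
    """Sketch of the Deletion metric (hypothetical helper, not the paper's code).

    image       : (H, W, C) float array.
    attribution : (H, W) importance map.
    predict     : callable returning the model's score for the target class.
    baseline_value : what "removed" pixels are set to -- the choice under study.
    """
    h, w = attribution.shape
    n = h * w
    # Rank pixels from most to least important according to the attribution map.
    order = np.argsort(attribution.ravel())[::-1]
    perturbed = image.copy()
    scores = [predict(perturbed)]
    for i in range(1, steps + 1):
        k = (i * n) // steps
        # Replace the top-k pixels (all channels) with the baseline value.
        flat = perturbed.reshape(n, -1)     # view into `perturbed`
        flat[order[:k]] = baseline_value
        scores.append(predict(perturbed))
    # Trapezoidal area under the deletion curve over the unit interval [0, 1];
    # a faster score drop (lower AUC) suggests a more faithful attribution.
    auc = sum((a + b) / 2.0 for a, b in zip(scores[:-1], scores[1:]))
    return auc / steps
```

With a toy "model" that just averages the image, deleting pixels in importance order drives the score linearly to zero, giving an AUC of 0.5; replacing `baseline_value=0.0` with, say, a dataset-mean or blurred baseline would yield a different curve, which is the trade-off the paper analyses.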
Similar Papers
Guidelines For The Choice Of The Baseline in XAI Attribution Methods
Artificial Intelligence
Helps AI explain its decisions to people.
Integrated Influence: Data Attribution with Baseline
Machine Learning (CS)
Shows which training data taught the AI best.
On the notion of missingness for path attribution explainability methods in medical settings: Guiding the selection of medically meaningful baselines
Machine Learning (CS)
Helps doctors trust AI by explaining its medical guesses.