Guidelines For The Choice Of The Baseline in XAI Attribution Methods
By: Cristian Morasso, Giorgio Dolci, Ilaria Boscolo Galazzo, and more
Potential Business Impact:
Helps AI explain its decisions to people.
Given the broad adoption of artificial intelligence, it is essential to provide evidence that AI models are reliable, trustworthy, and fair. To this end, the emerging field of eXplainable AI develops techniques to probe these requirements, counterbalancing the hype driving the pervasiveness of this technology. Among the many facets of this issue, this paper focuses on baseline attribution methods, which derive a feature attribution map at the network input by relying on a "neutral" stimulus, usually called the "baseline". The choice of the baseline is crucial, as it determines the explanation of the network's behavior. In this framework, the paper has the twofold goal of shedding light on the implications of the baseline choice and of providing a simple yet effective method for identifying the best baseline for the task. To achieve this, we propose a decision boundary sampling method: since the baseline, by definition, lies on the decision boundary, the boundary naturally becomes the search domain. Experiments are performed on synthetic examples and validated against state-of-the-art methods. Despite being limited in experimental scope, this contribution is relevant as it offers clear guidelines and a simple proxy for baseline selection, reducing ambiguity and enhancing the reliability and trustworthiness of deep models.
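The abstract only sketches the idea at a high level. As an illustration, the snippet below is a minimal, hypothetical PyTorch sketch of one way to sample a point near the decision boundary (bisection along the segment between two differently classified inputs) and then use that point as the baseline of a standard path attribution method such as Integrated Gradients. The function names (`find_boundary_point`, `integrated_gradients`), the bisection strategy, and the toy model are illustrative assumptions, not the authors' actual procedure.

```python
import torch

def find_boundary_point(model, x_a, x_b, target_class, steps=50):
    """Bisection search along the segment between x_a and x_b for a point
    where the model's prediction flips, i.e. a point near the decision boundary.

    Assumes x_a is predicted as `target_class` and x_b is not.
    """
    lo, hi = 0.0, 1.0  # interpolation coefficients toward x_b
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        x_mid = (1 - mid) * x_a + mid * x_b
        pred = model(x_mid.unsqueeze(0)).argmax(dim=1).item()
        if pred == target_class:
            lo = mid   # still on x_a's side of the boundary; move toward x_b
        else:
            hi = mid   # crossed the boundary; move back toward x_a
    return (1 - hi) * x_a + hi * x_b  # point just across the boundary

def integrated_gradients(model, x, baseline, target_class, n_steps=64):
    """Plain Integrated Gradients computed with an arbitrary baseline."""
    alphas = torch.linspace(0.0, 1.0, n_steps).view(-1, *([1] * x.dim()))
    interpolated = baseline + alphas * (x - baseline)     # (n_steps, *x.shape)
    interpolated.requires_grad_(True)
    score = model(interpolated)[:, target_class].sum()
    grads = torch.autograd.grad(score, interpolated)[0]
    return (x - baseline) * grads.mean(dim=0)

# Hypothetical usage on a toy 2-D classifier (untrained, for illustration only).
model = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
model.eval()
x = torch.tensor([1.5, -0.3])        # input to explain, assumed predicted as class 0
x_other = torch.tensor([-1.5, 0.8])  # reference assumed to be predicted differently
baseline = find_boundary_point(model, x, x_other, target_class=0)
attributions = integrated_gradients(model, x, baseline, target_class=0)
```

The design choice reflected here is the one stated in the abstract: rather than defaulting to an all-zero or mean input, the baseline is searched for on the decision boundary itself, so the resulting attributions are measured relative to a genuinely "neutral" stimulus for the class under study.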
Similar Papers
Back to the Baseline: Examining Baseline Effects on Explainability Metrics
Artificial Intelligence
Makes AI explanations fairer by fixing a hidden bias.
Integrated Influence: Data Attribution with Baseline
Machine Learning (CS)
Shows which training data taught the AI best.
On the notion of missingness for path attribution explainability methods in medical settings: Guiding the selection of medically meaningful baselines
Machine Learning (CS)
Helps doctors trust AI by explaining its medical guesses.