Explanation Bias is a Product: Revealing the Hidden Lexical and Position Preferences in Post-Hoc Feature Attribution
By: Jonathan Kamp, Roos Bakker, Dominique Blok
Potential Business Impact:
Explains how AI understands words, showing its hidden biases.
Good-quality explanations strengthen our understanding of language models and data. Feature attribution methods, such as Integrated Gradients, are a type of post-hoc explainer that can provide token-level insights. However, explanations of the same input may vary greatly due to the underlying biases of different methods. Users who are aware of this issue may mistrust the explanations' utility, while unaware users may place undue trust in them. In this work, we look beyond the superficial inconsistencies between attribution methods and structure their biases through a model- and method-agnostic framework of three evaluation metrics. We systematically assess both lexical and position bias (what and where in the input) for two transformers: first, in a controlled, pseudo-random classification task on artificial data; then, in a semi-controlled causal relation detection task on natural data. We find that lexical and position biases are structurally unbalanced in our model comparison, with models that score high on one type scoring low on the other. We also find signs that methods producing anomalous explanations are more likely to be biased themselves.
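As a minimal sketch of the kind of token-level attribution the abstract refers to, the snippet below uses the Captum implementation of Integrated Gradients over a Hugging Face transformer classifier to produce one attribution score per input token. The model name, example sentence, and pad-token baseline are illustrative assumptions, not the paper's experimental setup or its bias metrics.

```python
# Illustrative sketch (not the paper's code): token-level Integrated Gradients
# attributions for a transformer classifier. Assumes the `transformers` and
# `captum` libraries; the model name and input sentence are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def forward_logits(input_ids, attention_mask):
    # Return the class logits; Captum attributes the selected target logit.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

text = "The outage was caused by a faulty update."  # placeholder input
enc = tokenizer(text, return_tensors="pt")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]
target = model(**enc).logits.argmax(dim=-1).item()  # explain the predicted class

# Baseline: same sequence with all non-special tokens replaced by [PAD].
baseline_ids = torch.full_like(input_ids, tokenizer.pad_token_id)
baseline_ids[0, 0] = input_ids[0, 0]    # keep [CLS]
baseline_ids[0, -1] = input_ids[0, -1]  # keep [SEP]

# Attribute the target logit to the input embeddings, then sum over the
# embedding dimension to get one score per token.
lig = LayerIntegratedGradients(forward_logits, model.get_input_embeddings())
attributions = lig.attribute(
    inputs=input_ids,
    baselines=baseline_ids,
    additional_forward_args=(attention_mask,),
    target=target,
)
token_scores = attributions.sum(dim=-1).squeeze(0)

for tok, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()), token_scores):
    print(f"{tok:>12s}  {score.item():+.4f}")
```

Running this prints a signed attribution per token; comparing such per-token scores across methods and inputs is the kind of evidence the paper's lexical and position bias analysis builds on.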
Similar Papers
Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
Computation and Language
Makes AI explain itself fairly to everyone.
What Triggers my Model? Contrastive Explanations Inform Gender Choices by Translation Models
Computation and Language
Finds why computer translations get gender wrong.
Explanations as Bias Detectors: A Critical Study of Local Post-hoc XAI Methods for Fairness Exploration
Artificial Intelligence
Finds unfairness in AI, making it more just.