Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
By: Mahdi Dhaini, Ege Erdogan, Nils Feldhus, and more
Potential Business Impact:
Makes AI explain itself fairly to everyone.
While research on applications and evaluations of explanation methods continues to expand, the fairness of explanation methods, in particular disparities in their performance across subgroups, remains an often overlooked aspect. In this paper, we address this gap by showing that, across three tasks and five language models, widely used post-hoc feature attribution methods exhibit significant gender disparities in their faithfulness, robustness, and complexity. These disparities persist even when the models are pre-trained or fine-tuned on particularly unbiased datasets, indicating that the disparities we observe are not merely consequences of biased training data. Our results highlight the importance of addressing disparities in explanations when developing and applying explainability methods, as these can lead to biased outcomes against certain subgroups, with particularly critical implications in high-stakes contexts. Furthermore, our findings underscore the importance of incorporating the fairness of explanations, alongside overall model fairness and explainability, as a requirement in regulatory frameworks.
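To make the kind of measurement the paper describes concrete, here is a minimal sketch of how one might compare a faithfulness-style metric of a post-hoc attribution method across two gender subgroups. It is not the authors' pipeline: the model name, the gradient-times-input attribution, the comprehensiveness metric (probability drop after masking the top-attributed tokens), the example sentences, and the top-k value are all illustrative assumptions.

```python
# Sketch: compare mean "comprehensiveness" of a gradient-x-input attribution
# between two gender subgroups. Model, metric, data, and k are assumptions,
# not the paper's exact setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def attribute(text):
    """Gradient-x-input token attributions for the model's predicted class."""
    enc = tokenizer(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    pred = logits.argmax(dim=-1).item()
    logits[0, pred].backward()
    # Sum over the embedding dimension -> one attribution score per token.
    scores = (embeds.grad * embeds).sum(-1).squeeze(0).detach()
    return enc, pred, scores

def comprehensiveness(text, k=3):
    """Drop in predicted-class probability after masking the k most-attributed
    tokens; larger drops indicate a more faithful explanation."""
    enc, pred, scores = attribute(text)
    mask_id = tokenizer.mask_token_id if tokenizer.mask_token_id is not None else tokenizer.unk_token_id
    with torch.no_grad():
        p_full = torch.softmax(model(**enc).logits, dim=-1)[0, pred]
        top = scores.topk(min(k, scores.numel())).indices
        masked_ids = enc["input_ids"].clone()
        masked_ids[0, top] = mask_id
        p_masked = torch.softmax(
            model(input_ids=masked_ids, attention_mask=enc["attention_mask"]).logits, dim=-1
        )[0, pred]
    return (p_full - p_masked).item()

# Illustrative subgroup inputs; a real study would use a labeled, balanced corpus.
subgroups = {
    "female": ["She delivered an excellent keynote.", "The actress gave a moving performance."],
    "male": ["He delivered an excellent keynote.", "The actor gave a moving performance."],
}

for group, texts in subgroups.items():
    vals = [comprehensiveness(t) for t in texts]
    print(f"{group}: mean comprehensiveness = {sum(vals) / len(vals):.3f}")
```

A gap between the subgroup means in a setup like this, evaluated over a sufficiently large and balanced corpus, is the kind of faithfulness disparity the paper reports; analogous per-subgroup comparisons would apply to robustness and complexity metrics.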
Similar Papers
Explanations as Bias Detectors: A Critical Study of Local Post-hoc XAI Methods for Fairness Exploration
Artificial Intelligence
Finds unfairness in AI, making it more just.
Explainable post-training bias mitigation with distribution-based fairness metrics
Machine Learning (CS)
Makes AI fair without retraining.
How to Probe: Simple Yet Effective Techniques for Improving Post-hoc Explanations
CV and Pattern Recognition
Makes AI explanations more trustworthy and accurate.