Argumentative Debates for Transparent Bias Detection [Technical Report]
By: Hamed Ayoobi, Nico Potyka, Anna Rapberger, and more
Potential Business Impact:
Detects bias in AI decisions and explains the reasoning behind each finding.
As the use of AI systems in society grows, addressing potential biases that emerge from data or are learned by models is essential to prevent systematic disadvantages against specific groups. Several notions of (un)fairness have been proposed in the literature, alongside corresponding algorithmic methods for detecting and mitigating unfairness, but, with very few exceptions, these tend to ignore transparency. Instead, interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. In this paper, we contribute a novel interpretable, explainable method for bias detection relying on debates about the presence of bias against individuals, based on the values of protected features for the individuals and others in their neighbourhoods. Our method builds upon techniques from formal and computational argumentation, whereby debates result from arguing about biases within and across neighbourhoods. We provide formal, quantitative, and qualitative evaluations of our method, highlighting its strengths in performance against baselines, as well as its interpretability and explainability.
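To make the neighbourhood-based idea concrete, the sketch below illustrates how arguments for and against a bias claim could be collected from an individual's neighbourhood. This is a minimal, hypothetical illustration: the function names, the synthetic data, and the simple vote-counting resolution are assumptions for this sketch, not the authors' method, which resolves such debates using formal and computational argumentation within and across neighbourhoods.

```python
# Hypothetical sketch of neighbourhood-based bias arguments.
# All names and the counting-based resolution are illustrative assumptions,
# not the paper's actual algorithm.
import numpy as np

def neighbourhood_bias_arguments(X, y_pred, protected, idx, k=10):
    """Collect pro/con arguments for the claim "individual `idx` is treated unfairly".

    X         : (n, d) feature matrix (protected feature excluded)
    y_pred    : (n,) model decisions, 1 = favourable outcome
    protected : (n,) protected-feature values (e.g. 0/1 group membership)
    idx       : index of the individual under scrutiny
    k         : neighbourhood size
    """
    # Neighbourhood = k closest individuals in non-protected feature space.
    dists = np.linalg.norm(X - X[idx], axis=1)
    neighbours = np.argsort(dists)[1:k + 1]  # skip idx itself (distance 0)

    pro, con = [], []
    for j in neighbours:
        if protected[j] != protected[idx]:
            if y_pred[j] == 1 and y_pred[idx] == 0:
                # Similar individual, different protected value, better outcome:
                # evidence supporting the bias claim.
                pro.append(j)
            elif y_pred[j] == y_pred[idx]:
                # Same outcome despite a different protected value: counter-evidence.
                con.append(j)
    return pro, con

# Toy usage on synthetic data (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
protected = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
pro, con = neighbourhood_bias_arguments(X, y_pred, protected, idx=3, k=15)
print(f"pro-bias arguments: {len(pro)}, counter-arguments: {len(con)}")
```

In this sketch the "debate" is resolved by simply comparing the number of supporting and attacking arguments; the paper instead evaluates such debates with argumentation semantics, which is what makes the resulting bias verdicts interpretable and explainable.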
Similar Papers
On Explaining Proxy Discrimination and Unfairness in Individual Decisions Made by AI Systems
Artificial Intelligence
Finds unfairness hidden in computer decisions.
Explanations as Bias Detectors: A Critical Study of Local Post-hoc XAI Methods for Fairness Exploration
Artificial Intelligence
Studies how well explanation methods can uncover unfairness in AI decisions.
The Effect of Enforcing Fairness on Reshaping Explanations in Machine Learning Models
Machine Learning (CS)
Shows how enforcing fairness reshapes a model's explanations.