Argumentative Debates for Transparent Bias Detection [Technical Report]

Published: August 6, 2025 | arXiv ID: 2508.04511v1

By: Hamed Ayoobi, Nico Potyka, Anna Rapberger, and more

Potential Business Impact:

Detects unfairness in AI decisions while explaining its reasoning, so biased outcomes can be identified and corrected.

Plain English Summary

Imagine AI making decisions about things like loan applications or job interviews. This new method helps make sure those decisions are fair to everyone, no matter their background. It works by letting the AI explain *why* it made a certain decision, allowing us to spot and fix any unfairness. This means AI can be a more trustworthy tool that helps, rather than harms, different groups of people.

Abstract

As the use of AI systems in society grows, addressing potential biases that emerge from data or are learned by models is essential to prevent systematic disadvantages against specific groups. Several notions of (un)fairness have been proposed in the literature, alongside corresponding algorithmic methods for detecting and mitigating unfairness, but, with very few exceptions, these tend to ignore transparency. Yet interpretability and explainability are core requirements for algorithmic fairness, arguably even more so than for other algorithmic solutions, given the human-oriented nature of fairness. In this paper, we contribute a novel interpretable, explainable method for bias detection relying on debates about the presence of bias against individuals, based on the values of protected features for the individuals and others in their neighbourhoods. Our method builds upon techniques from formal and computational argumentation, whereby debates result from arguing about biases within and across neighbourhoods. We provide formal, quantitative, and qualitative evaluations of our method, highlighting its strengths in performance against baselines, as well as its interpretability and explainability.
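The summary above gives no implementation details of the argumentation framework. As a rough, hypothetical sketch of the underlying neighbourhood-comparison idea only (not the authors' method), one might flag an individual when their neighbours who differ only in a protected feature receive noticeably different model outcomes; a large outcome gap plays the role of an argument for bias in that neighbourhood, a small gap the role of an argument against it. All names and thresholds below are illustrative assumptions.

```python
import numpy as np

def neighbourhood_bias_arguments(X, protected, y_pred, k=10, gap_threshold=0.2):
    """Toy neighbourhood-based bias check (illustrative sketch, not the paper's method).

    X             : (n, d) array of non-protected features
    protected     : (n,) array of protected-feature values (e.g. 0/1 group labels)
    y_pred        : (n,) array of model decisions (1 = favourable outcome)
    k             : neighbourhood size
    gap_threshold : outcome-rate gap above which an "argument for bias" is raised
    Returns a list of (index, gap) pairs for flagged individuals.
    """
    X = np.asarray(X, dtype=float)
    protected = np.asarray(protected)
    y_pred = np.asarray(y_pred)

    flagged = []
    for i in range(len(X)):
        # k nearest neighbours of individual i in the non-protected feature space
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]  # skip i itself

        same = nbrs[protected[nbrs] == protected[i]]
        diff = nbrs[protected[nbrs] != protected[i]]
        if len(same) == 0 or len(diff) == 0:
            continue  # no cross-group comparison possible in this neighbourhood

        # Gap in favourable-outcome rates between neighbours sharing i's protected
        # value and those with a different value: large gap ~ argument for bias.
        gap = abs(y_pred[same].mean() - y_pred[diff].mean())
        if gap > gap_threshold:
            flagged.append((i, float(gap)))
    return flagged


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))        # non-protected features
    protected = rng.integers(0, 2, 200)  # binary protected attribute
    # Synthetic decisions that partly depend on the protected attribute
    y_pred = ((X[:, 0] + 0.8 * protected) > 0.5).astype(int)
    print(f"{len(neighbourhood_bias_arguments(X, protected, y_pred))} individuals flagged")
```

In the paper, such per-neighbourhood pro/con evidence is organised into formal debates drawn from computational argumentation, which is what yields the interpretable, explainable bias verdicts; the sketch only illustrates the neighbourhood comparison that those debates argue over.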

Page Count
13 pages

Category
Computer Science: Artificial Intelligence