On Explaining Proxy Discrimination and Unfairness in Individual Decisions Made by AI Systems

Published: September 30, 2025 | arXiv ID: 2509.25662v1

By: Belona Sonna, Alban Grastien

Potential Business Impact:

Reveals hidden unfairness in automated decisions made about individuals.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Artificial intelligence (AI) systems in high-stakes domains raise concerns about proxy discrimination, unfairness, and explainability. Existing audits often fail to reveal why unfairness arises, particularly when rooted in structural bias. We propose a novel framework using formal abductive explanations to explain proxy discrimination in individual AI decisions. Leveraging background knowledge, our method identifies which features act as unjustified proxies for protected attributes, revealing hidden structural biases. Central to our approach is the concept of aptitude, a task-relevant property independent of group membership, with a mapping function aligning individuals of equivalent aptitude across groups to assess fairness substantively. As a proof of concept, we showcase the framework with examples taken from the German credit dataset, demonstrating its applicability in real-world cases.
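To make the idea concrete, the sketch below illustrates one way a subset-minimal abductive explanation for a single decision of a toy credit-style classifier could be computed, and how its features could then be checked against background knowledge about suspected proxies. This is only an illustrative sketch, not the authors' implementation: the feature names, domains, classifier rule, and proxy list are all hypothetical.

```python
# Illustrative sketch only -- not the paper's implementation.
# Computes a subset-minimal abductive explanation for one decision of a
# toy rule-based classifier, then flags explanation features that appear
# in an (assumed) list of suspected proxies for a protected attribute.
from itertools import product

# Hypothetical discrete feature domains for a toy credit-style example.
DOMAINS = {
    "housing":    ["rent", "own"],      # hypothetical proxy for a protected attribute
    "employment": ["short", "long"],
    "savings":    ["low", "high"],
}

# Background knowledge (assumed): features suspected to act as proxies.
SUSPECTED_PROXIES = {"housing"}

def classify(x):
    """Toy classifier: deny only applicants who rent AND have low savings."""
    return "deny" if (x["housing"] == "rent" and x["savings"] == "low") else "approve"

def is_sufficient(fixed, prediction):
    """True if fixing `fixed` forces `prediction` for every completion of the free features."""
    free = [f for f in DOMAINS if f not in fixed]
    for values in product(*(DOMAINS[f] for f in free)):
        candidate = dict(fixed, **dict(zip(free, values)))
        if classify(candidate) != prediction:
            return False
    return True

def abductive_explanation(instance):
    """Greedily shrink the instance to a subset-minimal set of features that entails the decision."""
    prediction = classify(instance)
    explanation = dict(instance)
    for feature in list(explanation):
        trial = {f: v for f, v in explanation.items() if f != feature}
        if is_sufficient(trial, prediction):
            explanation = trial
    return prediction, explanation

if __name__ == "__main__":
    applicant = {"housing": "rent", "employment": "long", "savings": "low"}
    prediction, explanation = abductive_explanation(applicant)
    print(f"decision: {prediction}, explanation: {explanation}")
    flagged = SUSPECTED_PROXIES & explanation.keys()
    if flagged:
        print(f"explanation relies on suspected proxy feature(s): {sorted(flagged)}")
```

Running this toy example yields the decision "deny" with the explanation {housing: rent, savings: low}, and "housing" is flagged as a suspected proxy; the paper's framework goes further by grounding such checks in formal background knowledge and the aptitude-based mapping described in the abstract.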

Page Count
14 pages

Category
Computer Science:
Artificial Intelligence