Intersectional Fairness via Mixed-Integer Optimization
By: Jiří Němeček, Mark Kozdoba, Illia Kryvoviaz, and more
Potential Business Impact:
Makes AI fair for everyone, not just groups.
The deployment of Artificial Intelligence in high-risk domains, such as finance and healthcare, necessitates models that are both fair and transparent. While regulatory frameworks, including the EU's AI Act, mandate bias mitigation, they are deliberately vague about the definition of bias. In line with existing research, we argue that true fairness requires addressing bias at the intersections of protected groups. We propose a unified framework that leverages Mixed-Integer Optimization (MIO) to train intersectionally fair and intrinsically interpretable classifiers. We prove that two measures of intersectional fairness (MSD and SPSF) are equivalent for the purpose of detecting the most unfair subgroup, and we empirically demonstrate that our MIO-based algorithm outperforms existing approaches at finding bias. We train high-performing, interpretable classifiers that bound intersectional bias below an acceptable threshold, offering a robust solution for regulated industries and beyond.
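To make the notion of a "most unfair subgroup" concrete, the sketch below brute-forces every intersectional subgroup (every combination of protected-attribute values) and reports the one whose positive-prediction rate deviates most from the overall rate, i.e. the largest statistical parity gap. This is an illustration only, not the paper's MIO formulation; the attribute names, values, and records are hypothetical, and exhaustive enumeration scales exponentially in the number of protected attributes, which is precisely the difficulty that motivates an optimization-based search.

```python
from itertools import product

# Hypothetical toy data: each record has two protected attributes and a
# binary model prediction. Names and values are illustrative only.
records = [
    {"sex": "F", "age": "young", "pred": 1},
    {"sex": "F", "age": "young", "pred": 0},
    {"sex": "F", "age": "old",   "pred": 1},
    {"sex": "M", "age": "young", "pred": 1},
    {"sex": "M", "age": "old",   "pred": 0},
    {"sex": "M", "age": "old",   "pred": 0},
]

def positive_rate(rows):
    """Fraction of rows predicted positive; None for an empty subgroup."""
    return sum(r["pred"] for r in rows) / len(rows) if rows else None

overall = positive_rate(records)

# Enumerate every intersectional subgroup (Cartesian product of attribute
# values) and measure how far its positive rate deviates from the overall rate.
worst_gap, worst_group = 0.0, None
for sex, age in product(["F", "M"], ["young", "old"]):
    rows = [r for r in records if r["sex"] == sex and r["age"] == age]
    rate = positive_rate(rows)
    if rate is None:
        continue  # skip empty intersections
    gap = abs(rate - overall)
    if gap > worst_gap:
        worst_gap, worst_group = gap, (sex, age)

print(worst_group, round(worst_gap, 3))
```

With two binary attributes there are only four intersections, but with k attributes the count grows multiplicatively, so enumerating subgroups quickly becomes infeasible; an MIO solver can instead search this combinatorial space implicitly.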
Similar Papers
A Unifying Human-Centered AI Fairness Framework
Machine Learning (CS)
Helps AI treat everyone fairly, no matter what.
MMM-fair: An Interactive Toolkit for Exploring and Operationalizing Multi-Fairness Trade-offs
Machine Learning (CS)
Makes AI fairer by finding hidden unfairness.
Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property
Machine Learning (CS)
Makes AI fair for everyone, not just groups.