Intersectional Fairness via Mixed-Integer Optimization

Published: January 27, 2026 | arXiv ID: 2601.19595v1

By: Jiří Němeček, Mark Kozdoba, Illia Kryvoviaz, and more

Potential Business Impact:

Makes AI systems fair across intersections of protected groups, not just for each group in isolation.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

The deployment of Artificial Intelligence in high-risk domains, such as finance and healthcare, necessitates models that are both fair and transparent. While regulatory frameworks, including the EU's AI Act, mandate bias mitigation, they are deliberately vague about the definition of bias. In line with existing research, we argue that true fairness requires addressing bias at the intersections of protected groups. We propose a unified framework that leverages Mixed-Integer Optimization (MIO) to train intersectionally fair and intrinsically interpretable classifiers. We prove that two measures of intersectional fairness (MSD and SPSF) are equivalent for detecting the most unfair subgroup, and empirically demonstrate that our MIO-based algorithm is more effective at finding bias. We train high-performing, interpretable classifiers whose intersectional bias is bounded below an acceptable threshold, offering a robust solution for regulated industries and beyond.
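The core object the abstract refers to, the "most unfair subgroup," can be illustrated with a brute-force check: enumerate every intersectional subgroup of the protected attributes and compare its positive-prediction rate against the overall rate. This is only a minimal sketch; the paper's actual MSD/SPSF definitions and its MIO formulation (which searches subgroups without exhaustive enumeration) are not reproduced here, and the function name and the "gap to overall rate" measure are illustrative assumptions.

```python
from itertools import product

def max_subgroup_discrepancy(preds, attrs):
    """Illustrative proxy for an intersectional fairness measure:
    the largest absolute gap between any intersectional subgroup's
    positive-prediction rate and the overall rate.

    preds: list of 0/1 predictions
    attrs: list of tuples, one tuple of protected-attribute values per sample
    """
    overall = sum(preds) / len(preds)
    # Collect the observed values of each protected attribute (columns of attrs).
    values_per_attr = [sorted(set(col)) for col in zip(*attrs)]
    worst = 0.0
    # Enumerate every intersectional subgroup (Cartesian product of values).
    for combo in product(*values_per_attr):
        idx = [i for i, a in enumerate(attrs) if tuple(a) == combo]
        if not idx:
            continue  # empty intersection, no samples to compare
        rate = sum(preds[i] for i in idx) / len(idx)
        worst = max(worst, abs(rate - overall))
    return worst

# Toy data: two binary protected attributes (e.g. sex, age group).
attrs = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (1, 1)]
preds = [1, 0, 1, 0, 1, 0]
print(max_subgroup_discrepancy(preds, attrs))  # prints 0.5
```

Note the exponential blow-up: with k protected attributes the number of intersections grows multiplicatively, which is precisely why the paper replaces enumeration with a Mixed-Integer Optimization search and why bounding the measure during training is nontrivial.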

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)