Testing Fairness with Utility Tradeoffs: A Wasserstein Projection Approach
By: Yan Chen, Zheng Tan, Jose Blanchet, and more
Potential Business Impact:
Tests if AI is fair without losing too much usefulness.
Ensuring fairness in data-driven decision making has become a central concern across domains such as marketing, lending, and healthcare, but fairness constraints often come at the cost of utility. We propose a statistical hypothesis testing framework that jointly evaluates approximate fairness and utility, relaxing strict fairness requirements while ensuring that overall utility remains above a specified threshold. Our framework builds on the strong demographic parity (SDP) criterion and incorporates a utility measure motivated by the potential outcomes framework. The test statistic is constructed via Wasserstein projections, enabling auditors to assess whether observed fairness-utility tradeoffs are intrinsic to the algorithm or attributable to randomness in the data. We show that the test is computationally tractable, interpretable, broadly applicable across machine learning models, and extendable to more general settings. We apply our approach to multiple real-world datasets, offering new insights into the fairness-utility tradeoff through the perspective of statistical hypothesis testing.
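To make the SDP idea concrete, here is a minimal sketch of the quantity such a test is built around: strong demographic parity asks that the distribution of model scores coincide across demographic groups, and the 1-Wasserstein distance between the two empirical score distributions measures the size of the violation. This is only an illustration of the underlying distance, not the paper's actual projection-based test statistic; the group labels, score distributions, and the use of `scipy.stats.wasserstein_distance` are all assumptions for the example.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical model scores for two demographic groups
# (Beta-distributed purely for illustration).
scores_a = rng.beta(2.0, 5.0, size=500)  # group A
scores_b = rng.beta(2.5, 5.0, size=500)  # group B

# SDP holds exactly when the two score distributions are equal;
# the 1-Wasserstein distance between them quantifies the gap.
sdp_gap = wasserstein_distance(scores_a, scores_b)
print(f"empirical SDP gap (1-Wasserstein): {sdp_gap:.4f}")
```

An approximate-fairness audit of the kind the abstract describes would then ask whether a gap of this size is intrinsic to the model or plausibly attributable to sampling noise, while separately checking that utility stays above a chosen threshold.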
Similar Papers
Size-adaptive Hypothesis Testing for Fairness
Machine Learning (CS)
Checks if computer programs are unfair to groups.
A Framework for Benchmarking Fairness-Utility Trade-offs in Text-to-Image Models via Pareto Frontiers
CV and Pattern Recognition
Finds better ways to make AI images fair.
A Multi-Objective Evaluation Framework for Analyzing Utility-Fairness Trade-Offs in Machine Learning Systems
Machine Learning (CS)
Helps AI make fair choices, showing pros and cons.