Testing Fairness with Utility Tradeoffs: A Wasserstein Projection Approach

Published: May 16, 2025 | arXiv ID: 2505.11678v3

By: Yan Chen, Zheng Tan, Jose Blanchet, and others

BigTech Affiliations: Stanford University

Potential Business Impact:

Provides a statistical test of whether an AI system satisfies approximate fairness while keeping its overall utility above a specified threshold.

Business Areas:
A/B Testing, Data and Analytics

Ensuring fairness in data-driven decision making has become a central concern across domains such as marketing, lending, and healthcare, but fairness constraints often come at the cost of utility. We propose a statistical hypothesis testing framework that jointly evaluates approximate fairness and utility, relaxing strict fairness requirements while ensuring that overall utility remains above a specified threshold. Our framework builds on the strong demographic parity (SDP) criterion and incorporates a utility measure motivated by the potential outcomes framework. The test statistic is constructed via Wasserstein projections, enabling auditors to assess whether observed fairness-utility tradeoffs are intrinsic to the algorithm or attributable to randomness in the data. We show that the test is computationally tractable, interpretable, broadly applicable across machine learning models, and extendable to more general settings. We apply our approach to multiple real-world datasets, offering new insights into the fairness-utility tradeoff through the perspective of statistical hypothesis testing.
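To give a rough flavor of the kind of audit the abstract describes, the sketch below measures an SDP violation as the 1-Wasserstein distance between the score distributions of two groups, then uses a plain permutation test to ask whether a gap that large could be due to randomness in the data. This is an illustrative simplification on synthetic scores, not the paper's Wasserstein-projection test statistic or its utility constraint; all data and names here are hypothetical.

```python
import numpy as np

def w1(a, b):
    """1-Wasserstein distance between two equal-size 1-D empirical samples:
    the mean absolute difference of their sorted order statistics."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(0)
# Hypothetical model scores for two demographic groups (synthetic data)
scores_a = rng.beta(2.0, 5.0, size=1000)
scores_b = rng.beta(2.3, 5.0, size=1000)

# Strong demographic parity (SDP) requires the score distributions to
# coincide across groups; W1 between them quantifies the violation.
observed = w1(scores_a, scores_b)

# Permutation null: shuffle group labels to see how large W1 gets
# from sampling noise alone when the groups are exchangeable.
pooled = np.concatenate([scores_a, scores_b])
n = len(scores_a)
null = np.empty(500)
for i in range(500):
    p = rng.permutation(pooled)
    null[i] = w1(p[:n], p[n:])

p_value = float(np.mean(null >= observed))
print(f"W1 = {observed:.4f}, p = {p_value:.3f}")
```

A small p-value suggests the disparity is intrinsic to the scoring rule rather than an artifact of the sample, which is the question the paper's test answers in a more principled way, jointly with a utility threshold.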

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡ΈπŸ‡¬ United States, Singapore

Page Count
40 pages

Category
Computer Science:
Computers and Society