Size-adaptive Hypothesis Testing for Fairness
By: Antonio Ferrara, Francesco Cozzi, Alan Perotti, and more
Potential Business Impact:
Checks whether computer programs treat some groups of people unfairly, even when those groups are small.
Determining whether an algorithmic decision-making system discriminates against a specific demographic typically involves comparing a single point estimate of a fairness metric against a predefined threshold. This practice is statistically brittle: it ignores sampling error and treats small demographic subgroups the same as large ones. The problem intensifies in intersectional analyses, where multiple sensitive attributes are considered jointly, giving rise to a larger number of smaller groups. As these groups become more granular, the data representing them becomes too sparse for reliable estimation, and fairness metrics yield excessively wide confidence intervals, precluding meaningful conclusions about potential unfair treatments. In this paper, we introduce a unified, size-adaptive, hypothesis-testing framework that turns fairness assessment into an evidence-based statistical decision. Our contribution is twofold. (i) For sufficiently large subgroups, we prove a Central-Limit result for the statistical parity difference, leading to analytic confidence intervals and a Wald test whose type-I (false positive) error is guaranteed at level $\alpha$. (ii) For the long tail of small intersectional groups, we derive a fully Bayesian Dirichlet-multinomial estimator; Monte-Carlo credible intervals are calibrated for any sample size and naturally converge to Wald intervals as more data becomes available. We validate our approach empirically on benchmark datasets, demonstrating how our tests provide interpretable, statistically rigorous decisions under varying degrees of data availability and intersectionality.
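The two regimes described in the abstract can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: it assumes the statistical parity difference (SPD) is P(Ŷ=1|A=a) − P(Ŷ=1|A=b), uses the standard two-proportion Wald approximation for large subgroups, and places a flat Dirichlet prior on the 2×2 (group × prediction) table for the small-sample Bayesian branch; all function names and the prior choice are hypothetical.

```python
# Sketch: size-adaptive assessment of statistical parity difference (SPD).
# Assumptions (not from the paper): SPD = P(Yhat=1 | A=a) - P(Yhat=1 | A=b);
# the Wald branch uses the two-proportion normal approximation; the Bayesian
# branch uses a flat Dirichlet(1,1,1,1) prior on the 2x2 group-by-prediction
# table. Function names are illustrative only.
import numpy as np
from scipy import stats

def wald_spd_test(pos_a, n_a, pos_b, n_b, alpha=0.05):
    """Wald confidence interval and two-sided test for SPD (large subgroups)."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    spd = p_a - p_b
    se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = stats.norm.ppf(1 - alpha / 2)
    ci = (spd - z * se, spd + z * se)
    p_value = 2 * stats.norm.sf(abs(spd) / se)
    return spd, ci, p_value

def dirichlet_spd_credible_interval(pos_a, n_a, pos_b, n_b,
                                    alpha=0.05, n_samples=100_000, seed=0):
    """Monte-Carlo credible interval for SPD from a Dirichlet posterior."""
    # Cell counts of the 2x2 table: (a, Yhat=1), (a, Yhat=0), (b, Yhat=1), (b, Yhat=0).
    counts = np.array([pos_a, n_a - pos_a, pos_b, n_b - pos_b], dtype=float)
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet(counts + 1.0, size=n_samples)   # flat prior posterior draws
    p_a = theta[:, 0] / (theta[:, 0] + theta[:, 1])       # P(Yhat=1 | A=a)
    p_b = theta[:, 2] / (theta[:, 2] + theta[:, 3])       # P(Yhat=1 | A=b)
    spd_draws = p_a - p_b
    lo, hi = np.quantile(spd_draws, [alpha / 2, 1 - alpha / 2])
    return spd_draws.mean(), (lo, hi)

# Large subgroup: the Wald test applies.
print(wald_spd_test(pos_a=430, n_a=1000, pos_b=520, n_b=1200))
# Small intersectional subgroup: fall back to the Bayesian credible interval.
print(dirichlet_spd_credible_interval(pos_a=3, n_a=12, pos_b=40, n_b=90))
```

With ample data the credible interval from the Dirichlet branch shrinks toward the Wald interval, which mirrors the convergence property the abstract describes; for tiny subgroups only the Bayesian interval remains well defined.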
Similar Papers
Testing Fairness with Utility Tradeoffs: A Wasserstein Projection Approach
Computers and Society
Tests if AI is fair without losing too much usefulness.
Quantifying Query Fairness Under Unawareness
Information Retrieval
Makes search results fair for everyone.
Quantifying Group Fairness in Community Detection
Social and Information Networks
Finds unfairness in network communities and helps fix it.