Measures of classification bias derived from sample size analysis
By: Ioannis Ivrissimtzis, Shauna Concannon, Matthew Houliston and more
Potential Business Impact:
Finds unfair computer decisions faster.
We propose the use of a simple intuitive principle for measuring algorithmic classification bias: the significance of the differences in a classifier's error rates across the various demographics is inversely commensurate with the sample size required to statistically detect them. That is, if large sample sizes are required to statistically establish biased behavior, the algorithm is less biased, and vice versa. In a simple setting, we assume two distinct demographics, and non-parametric estimates of the error rates on them, e1 and e2, respectively. We use a well-known approximate formula for the sample size of the chi-squared test, and verify some basic desirable properties of the proposed measure. Next, we compare the proposed measure with two other commonly used statistics, the difference e2-e1 and the ratio e2/e1 of the error rates. We establish that the proposed measure is essentially different in that it can rank algorithms for bias differently, and we discuss some of its advantages over the other two measures. Finally, we briefly discuss how some of the desirable properties of the proposed measure emanate from fundamental characteristics of the method, rather than the approximate sample size formula we used, and thus, are expected to hold in more complex settings with more than two demographics.
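The idea in the abstract can be sketched in a few lines. Below is a minimal illustration, assuming the standard normal-approximation sample-size formula for comparing two proportions (which is equivalent to the chi-squared test in the two-demographic case); the function names, default significance level (0.05), and default power (0.8) are illustrative choices, not taken from the paper.

```python
from statistics import NormalDist

def required_sample_size(e1, e2, alpha=0.05, power=0.8):
    """Approximate per-group sample size needed for a two-proportion
    (chi-squared) test to statistically detect the difference between
    error rates e1 and e2 at the given significance level and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (e1 + e2) / 2                       # pooled error rate
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (e1 * (1 - e1) + e2 * (1 - e2)) ** 0.5) ** 2
    return numerator / (e2 - e1) ** 2

def bias_measure(e1, e2):
    """A measure inversely commensurate with the required sample size:
    the smaller the sample needed to detect the disparity, the larger
    the bias score.  (Illustrative choice of 1/n.)"""
    return 1.0 / required_sample_size(e1, e2)
```

For example, detecting the gap between error rates 0.1 and 0.3 requires far fewer samples than detecting the gap between 0.1 and 0.2, so the former classifier receives a higher bias score under this measure.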
Similar Papers
Algorithmic Accountability in Small Data: Sample-Size-Induced Bias Within Classification Metrics
Machine Learning (CS)
Fixes unfair computer decisions when groups are different sizes.
Getting it right: Methods for risk ratios and risk differences in cluster randomized trials with a small number of clusters
Methodology
Fixes math for small medical studies.
On the Origins of Sampling Bias: Implications on Fairness Measurement and Mitigation
Machine Learning (CS)
Fixes unfairness in computer learning.