On the Origins of Sampling Bias: Implications on Fairness Measurement and Mitigation
By: Sami Zhioua, Ruta Binkyte, Ayoub Ouni, and more
Potential Business Impact:
Helps measure and reduce unfairness in machine learning models.
Accurately measuring discrimination is crucial to faithfully assessing the fairness of trained machine learning (ML) models. Any bias in measuring discrimination leads to either amplification or underestimation of the existing disparity. Several sources of bias exist, and it is typically assumed that bias resulting from machine learning is borne equally by different groups (e.g. females vs. males, whites vs. blacks, etc.). If, however, bias is borne differently by different groups, it may exacerbate discrimination against specific sub-populations. Sampling bias, in particular, is used inconsistently in the literature to describe bias due to the sampling procedure. In this paper, we attempt to disambiguate this term by introducing clearly defined variants of sampling bias, namely sample size bias (SSB) and underrepresentation bias (URB). Through an extensive set of experiments on benchmark datasets and using mainstream learning algorithms, we expose relevant observations in several model training scenarios. The observations are finally framed as actionable recommendations for practitioners.
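To make the SSB/URB distinction concrete, the sketch below simulates the two variants on synthetic data: under sample size bias the whole dataset shrinks but both groups shrink proportionally, while under underrepresentation bias one group is sampled at a much lower rate, so the fairness estimate for that group becomes noisier. The data generator, group labels, sampling rates, and the choice of statistical parity difference as the metric are illustrative assumptions, not the paper's exact experimental setup.

```python
# Illustrative sketch (not the paper's setup): contrast sample size bias (SSB)
# with underrepresentation bias (URB) when measuring statistical parity.
import numpy as np

rng = np.random.default_rng(0)

def make_population(n=100_000, p_group=0.5):
    """Synthetic population: a binary protected attribute and a binary outcome
    whose positive rate differs across groups (a fixed 'true' disparity)."""
    group = rng.binomial(1, p_group, size=n)       # 0 = group A, 1 = group B
    base_rate = np.where(group == 1, 0.55, 0.45)   # true positive rates per group
    outcome = rng.binomial(1, base_rate)
    return group, outcome

def statistical_parity_difference(group, outcome):
    """P(outcome = 1 | group B) - P(outcome = 1 | group A)."""
    return outcome[group == 1].mean() - outcome[group == 0].mean()

group, outcome = make_population()
print(f"true disparity:              {statistical_parity_difference(group, outcome):+.3f}")

# Sample size bias (SSB): a small sample, but both groups shrink proportionally.
idx_ssb = rng.choice(len(group), size=500, replace=False)
print(f"SSB sample (n=500):          {statistical_parity_difference(group[idx_ssb], outcome[idx_ssb]):+.3f}")

# Underrepresentation bias (URB): group B is kept at a much lower rate, so its
# estimated positive rate is noisier and the measured disparity can drift.
keep = rng.random(len(group)) < np.where(group == 1, 0.005, 0.5)
print(f"URB sample (B undersampled): {statistical_parity_difference(group[keep], outcome[keep]):+.3f}")
```

Running the sketch repeatedly with different seeds shows the URB estimate fluctuating far more than the SSB estimate of comparable size, which is the kind of group-dependent measurement error the paper distinguishes.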
Similar Papers
Algorithmic Accountability in Small Data: Sample-Size-Induced Bias Within Classification Metrics
Machine Learning (CS)
Fixes unfair computer decisions when groups are different sizes.
Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
Machine Learning (CS)
Makes AI fairer without hurting anyone's performance.
Active Data Sampling and Generation for Bias Remediation
Machine Learning (CS)
Fixes unfair computer guesses using smart data.