A Comprehensive Evaluation of the Sensitivity of Density-Ratio Estimation Based Fairness Measurement in Regression
By: Abdalwahab Almajed, Maryam Tabar, Peyman Najafirad
Potential Business Impact:
Helps check whether computer predictions treat people fairly.
The prevalence of algorithmic bias in Machine Learning (ML)-driven approaches has inspired growing research on measuring and mitigating bias in the ML domain. Accordingly, prior research has studied how to measure fairness in regression, which is a complex problem. In particular, recent research proposed formulating it as a density-ratio estimation problem and relied on a probabilistic classifier based on Logistic Regression to solve it. However, there are several other methods to estimate a density ratio, and to the best of our knowledge, prior work did not study the sensitivity of such fairness measurement methods to the choice of the underlying density-ratio estimation algorithm. To fill this gap, this paper develops a set of fairness measurement methods with various density-ratio estimation cores and thoroughly investigates how different cores affect the achieved level of fairness. Our experimental results show that the choice of density-ratio estimation core can significantly affect the outcome of the fairness measurement method and can even produce inconsistent results with respect to the relative fairness of different algorithms. These observations point to major issues with density-ratio estimation based fairness measurement in regression and a need for further research to enhance its reliability.
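To illustrate the probabilistic-classifier approach to density-ratio estimation mentioned above, the following is a minimal sketch, not the authors' implementation: a Logistic Regression classifier is trained to distinguish samples from the numerator distribution p(x) and the denominator distribution q(x), and Bayes' rule then recovers r(x) = p(x)/q(x) from the classifier's probabilities, up to a class-prior correction. It assumes scikit-learn and NumPy; all function and variable names are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def estimate_density_ratio(x_numerator, x_denominator):
        """Estimate r(x) = p(x)/q(x) at the numerator samples via a probabilistic classifier."""
        # Label numerator samples 1 and denominator samples 0, then train a classifier.
        X = np.vstack([x_numerator, x_denominator])
        y = np.concatenate([np.ones(len(x_numerator)), np.zeros(len(x_denominator))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)

        # P(label = 1 | x) at the numerator samples.
        p1 = clf.predict_proba(x_numerator)[:, 1]

        # Correct for unequal sample sizes: r(x) = (n_q / n_p) * p1 / (1 - p1).
        prior_correction = len(x_denominator) / len(x_numerator)
        return prior_correction * p1 / np.clip(1.0 - p1, 1e-12, None)

    # Example usage with synthetic data: two Gaussians with shifted means.
    rng = np.random.default_rng(0)
    x_p = rng.normal(0.0, 1.0, size=(500, 1))   # samples from p(x)
    x_q = rng.normal(0.5, 1.0, size=(500, 1))   # samples from q(x)
    print(estimate_density_ratio(x_p, x_q)[:5])

Other density-ratio estimation cores discussed in the paper (e.g., direct ratio-matching methods) would replace the classifier step while keeping the same interface, which is precisely the sensitivity the paper evaluates.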
Similar Papers
Estimating Unbounded Density Ratios: Applications in Error Control under Covariate Shift
Machine Learning (Stat)
Makes computer learning better with tricky data.
Toward Unifying Group Fairness Evaluation from a Sparsity Perspective
Machine Learning (CS)
Makes computer decisions fairer for everyone.
Uncovering Fairness through Data Complexity as an Early Indicator
Machine Learning (CS)
Finds hidden unfairness in computer decisions.