Reasonable uncertainty: Confidence intervals in empirical Bayes discrimination detection
By: Jiaying Gu, Nikolaos Ignatiadis, Azeem M. Shaikh
Potential Business Impact:
Finds how much unfairness is really there.
We revisit empirical Bayes discrimination detection, focusing on uncertainty arising from both partial identification and sampling variability. While prior work has mostly focused on partial identification, we find that some empirical findings are not robust to sampling uncertainty. To better connect statistical evidence to the magnitude of real-world discriminatory behavior, we propose a counterfactual odds-ratio estimand with attractive properties and interpretation. Our analysis reveals the importance of careful attention to uncertainty quantification and downstream goals in empirical Bayes analyses.
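To make the sampling-uncertainty point concrete, here is a minimal sketch in Python of the simplest version of the idea: an odds ratio comparing outcome rates between two groups, with a percentile-bootstrap confidence interval capturing sampling variability. This is only an illustration under assumed toy data; the counts, function names, and bootstrap approach below are hypothetical and do not reproduce the paper's counterfactual odds-ratio estimand or its empirical Bayes machinery, which also addresses partial identification.

import numpy as np

rng = np.random.default_rng(0)

def odds_ratio(success_a, n_a, success_b, n_b):
    # Sample odds ratio of success between groups A and B.
    p_a = success_a / n_a
    p_b = success_b / n_b
    return (p_a / (1 - p_a)) / (p_b / (1 - p_b))

def bootstrap_ci(success_a, n_a, success_b, n_b, n_boot=10_000, alpha=0.05):
    # Percentile bootstrap CI for the odds ratio (sampling uncertainty only).
    draws_a = rng.binomial(n_a, success_a / n_a, size=n_boot)
    draws_b = rng.binomial(n_b, success_b / n_b, size=n_boot)
    # Drop degenerate resamples where a rate is exactly 0 or 1.
    ors = [
        odds_ratio(a, n_a, b, n_b)
        for a, b in zip(draws_a, draws_b)
        if 0 < a < n_a and 0 < b < n_b
    ]
    return np.quantile(ors, [alpha / 2, 1 - alpha / 2])

# Hypothetical audit-study counts: callbacks out of applications per group.
est = odds_ratio(35, 500, 50, 500)
lo, hi = bootstrap_ci(35, 500, 50, 500)
print(f"odds ratio = {est:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")

A point estimate of an odds ratio near 0.7 may look like evidence of disparate treatment, but if the interval spans 1, the finding is not robust to sampling variability, which is the kind of fragility the abstract flags in prior empirical work.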
Similar Papers
Uncertainty-Aware Strategies: A Model-Agnostic Framework for Robust Financial Optimization through Subsampling
Computational Finance
Helps money decisions be safer with uncertain numbers.
Calibrated and uncertain? Evaluating uncertainty estimates in binary classification models
Machine Learning (CS)
Helps computers know when they are unsure.
Selective and marginal selective inference for exceptional groups
Statistics Theory
Helps scientists pick the best group to study.