Fair for a few: Improving Fairness in Doubly Imbalanced Datasets
By: Ata Yalcin, Asli Umay Ozturk, Yigit Sever, and more
Potential Business Impact:
Makes AI fair even with tricky, uneven data.
Fairness has been identified as an important aspect of Machine Learning and Artificial Intelligence solutions for decision making. Recent literature offers a variety of approaches for debiasing; however, many of them fall short when the data collection is imbalanced. In this paper, we focus on a particular case, fairness in doubly imbalanced datasets, where the data collection is imbalanced both for the label and for the groups of the sensitive attribute. First, we present an exploratory analysis to illustrate the limitations of debiasing on a doubly imbalanced dataset. Then, a multi-criteria-based solution is proposed for finding the most suitable sampling and distribution for the label and the sensitive attribute, in terms of both fairness and classification accuracy.
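The abstract does not specify implementation details, so the sketch below is only a plausible illustration of the general idea, not the paper's method: train a classifier under several candidate resampling strategies for the label-by-sensitive-group cells, then score each candidate with a simple weighted combination of accuracy and a demographic parity gap. The helper names, the logistic regression model, the equal 0.5/0.5 weighting, and the synthetic data are all assumptions made for the example.

```python
# Minimal sketch (assumptions noted above): compare resampling strategies on a
# doubly imbalanced dataset using a toy multi-criteria score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive prediction rates across sensitive groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)


def oversample_group_label(X, y, s, rng):
    """Naively oversample every (sensitive group, label) cell to the largest cell size."""
    cells = [(g, c) for g in np.unique(s) for c in np.unique(y)]
    target = max(np.sum((s == g) & (y == c)) for g, c in cells)
    idx = []
    for g, c in cells:
        members = np.where((s == g) & (y == c))[0]
        idx.extend(rng.choice(members, size=target, replace=True))
    idx = np.array(idx)
    return X[idx], y[idx], s[idx]


rng = np.random.default_rng(0)
# Synthetic doubly imbalanced data: a rare positive label and a rare sensitive group.
X, y = make_classification(n_samples=4000, weights=[0.9, 0.1], random_state=0)
s = rng.choice([0, 1], size=len(y), p=[0.85, 0.15])  # hypothetical sensitive attribute

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, s, test_size=0.3, stratify=y, random_state=0
)

candidates = {
    "no_resampling": lambda X, y, s: (X, y, s),
    "group_label_oversampling": lambda X, y, s: oversample_group_label(X, y, s, rng),
}

for name, sampler in candidates.items():
    Xr, yr, sr = sampler(X_tr, y_tr, s_tr)
    y_pred = LogisticRegression(max_iter=1000).fit(Xr, yr).predict(X_te)
    acc = accuracy_score(y_te, y_pred)
    gap = demographic_parity_gap(y_pred, s_te)
    score = 0.5 * acc + 0.5 * (1 - gap)  # toy aggregation of the two criteria
    print(f"{name}: accuracy={acc:.3f}, parity_gap={gap:.3f}, score={score:.3f}")
```

In practice, the candidate set and the aggregation of fairness and accuracy criteria would follow the paper's multi-criteria formulation rather than the fixed weights used here.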
Similar Papers
Bayes-Optimal Fair Classification with Multiple Sensitive Features
Machine Learning (Stat)
Makes AI fair for everyone, no matter what.
Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
Machine Learning (CS)
Makes AI fairer without hurting anyone's performance.
One Size Fits None: Rethinking Fairness in Medical AI
Machine Learning (CS)
Checks if AI doctors treat everyone fairly.