Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
By: Zhenpeng Chen, Xinyue Li, Jie M. Zhang, et al.
Potential Business Impact:
Makes AI fairer without hurting anyone's performance.
Fairness is a critical requirement for Machine Learning (ML) software, driving the development of numerous bias mitigation methods. Previous research has identified a leveling-down effect in bias mitigation for computer vision and natural language processing tasks, where fairness is achieved by lowering performance for all groups without benefiting the unprivileged group. However, it remains unclear whether this effect applies to bias mitigation for tabular data tasks, a key area in fairness research with significant real-world applications. This study evaluates eight bias mitigation methods for tabular data, including both widely used and cutting-edge approaches, across 44 tasks using five real-world datasets and four common ML models. Contrary to earlier findings, our results show that these methods operate in a zero-sum fashion, where improvements for unprivileged groups are linked to reduced benefits for traditionally privileged groups. Previous research indicates, however, that perceiving fairness as a zero-sum trade-off can complicate the broader adoption of fairness policies. To explore alternatives, we investigate an approach that applies the state-of-the-art bias mitigation method solely to unprivileged groups, showing potential to enhance the benefits of unprivileged groups without negatively affecting privileged groups or overall ML performance. Our study highlights potential pathways for achieving fairness improvements without zero-sum trade-offs, which could help advance the adoption of bias mitigation methods.
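The distinction the abstract draws (leveling-down vs. zero-sum vs. harm-free improvement) can be made concrete by comparing per-group performance before and after mitigation. Below is a minimal illustrative sketch, not code from the paper: the group names, numbers, and the `classify_effect` helper are all hypothetical, and "performance" stands in for whatever per-group metric (e.g. accuracy) a study measures.

```python
# Hypothetical sketch: label the effect of a bias mitigation step by
# comparing per-group performance before and after mitigation.
# Group names and numbers are illustrative, not taken from the paper.

def classify_effect(before, after):
    """before/after: dicts mapping group name -> performance score."""
    deltas = {g: after[g] - before[g] for g in before}
    if all(d <= 0 for d in deltas.values()):
        # Every group's performance drops (or stays flat): fairness is
        # achieved by making everyone worse off.
        return "leveling-down"
    if any(d > 0 for d in deltas.values()) and any(d < 0 for d in deltas.values()):
        # Gains for one group come with losses for another.
        return "zero-sum"
    # Some group improves and no group is harmed.
    return "harm-free improvement"

# Toy numbers (illustrative only): the unprivileged group improves while
# the privileged group declines, i.e. a zero-sum trade-off.
before = {"unprivileged": 0.70, "privileged": 0.90}
after  = {"unprivileged": 0.78, "privileged": 0.85}
print(classify_effect(before, after))  # zero-sum
```

The third outcome, where mitigation is applied only to unprivileged groups so that their scores rise while privileged-group scores are unchanged, corresponds to the "harm-free improvement" branch.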
Similar Papers
Fairness for the People, by the People: Minority Collective Action
Machine Learning (CS)
Helps minority groups fix unfair computer decisions.
The Effect of Enforcing Fairness on Reshaping Explanations in Machine Learning Models
Machine Learning (CS)
Makes AI fair for everyone, not just some.
Alternative Fairness and Accuracy Optimization in Criminal Justice
Machine Learning (CS)
Helps judges make fairer decisions about people.