FairRF: Multi-Objective Search for Single and Intersectional Software Fairness
By: Giordano d'Aloisio, Max Hort, Rebecca Moussa and more
Potential Business Impact:
Lets stakeholders tune the trade-off between fairness and prediction accuracy in ML classifiers to match their own priorities.
Background: The wide adoption of AI- and ML-based systems in sensitive domains raises severe concerns about their fairness. Many methods have been proposed in the literature to enhance software fairness, but the majority behave as black boxes, preventing stakeholders from prioritising fairness or effectiveness (i.e., prediction correctness) based on their needs.
Aims: In this paper, we introduce FairRF, a novel approach based on multi-objective evolutionary search that optimises fairness and effectiveness in classification tasks. FairRF uses a Random Forest (RF) model as its base classifier and searches for the hyperparameter configurations and data mutations that maximise fairness and effectiveness. It returns a set of Pareto-optimal solutions, allowing stakeholders to choose the one that best matches their needs.
Method: We conduct an extensive empirical evaluation of FairRF against 26 baselines across 11 scenarios, using five effectiveness and three fairness metrics. We also include two intersectional-bias variations of the fairness metrics, for a total of six fairness definitions analysed.
Results: FairRF significantly improves the fairness of base classifiers while maintaining consistent prediction effectiveness. It also optimises more consistently across all fairness definitions than state-of-the-art bias mitigation methods, and it outperforms the existing state-of-the-art approach for intersectional bias mitigation.
Conclusions: FairRF is an effective approach for bias mitigation that also allows stakeholders to adapt the development of fair software systems to their specific needs.
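The abstract's key output is a set of Pareto-optimal (effectiveness, fairness) trade-offs rather than a single model. As a minimal illustration of that idea (not FairRF's actual implementation), the sketch below filters a list of hypothetical candidate configurations, each scored by accuracy and a higher-is-better fairness score such as 1 minus statistical parity difference, down to its non-dominated subset:

```python
def pareto_front(candidates):
    """Return the non-dominated subset of (accuracy, fairness) pairs.

    Both objectives are maximised. A candidate is dominated if some
    other candidate is at least as good on both objectives and not
    identical to it. Illustrative helper only, not part of FairRF.
    """
    front = []
    for c in candidates:
        dominated = any(
            o[0] >= c[0] and o[1] >= c[1] and o != c
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical classifier configurations:
# (accuracy, 1 - statistical parity difference)
configs = [(0.91, 0.70), (0.88, 0.85), (0.85, 0.80), (0.90, 0.75)]
print(pareto_front(configs))  # → [(0.91, 0.7), (0.88, 0.85), (0.9, 0.75)]
```

The third configuration is dropped because (0.88, 0.85) beats it on both objectives; the remaining three are genuine trade-offs, and choosing among them is exactly the stakeholder decision the paper describes.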
Similar Papers
Fairness-Aware Insurance Pricing: A Multi-Objective Optimization Approach
Risk Management
Makes insurance fairer for everyone, not just some.
APFEx: Adaptive Pareto Front Explorer for Intersectional Fairness
Machine Learning (CS)
Makes computer decisions fairer for everyone.
On the Robustness of Fairness Practices: A Causal Framework for Systematic Evaluation
Software Engineering
Makes computer decisions fair for everyone.