Evolved SampleWeights for Bias Mitigation: Effectiveness Depends on Optimization Objectives
By: Anil K. Saini, Jose Guadalupe Hernandez, Emily F. Wong, et al.
Potential Business Impact:
Fixes unfair computer guesses by changing how data is used.
Machine learning models trained on real-world data may inadvertently make biased predictions that negatively impact marginalized communities. Reweighting is a method that can mitigate such bias in model predictions by assigning a weight to each data point used during model training. In this paper, we compare three methods for generating these weights: (1) evolving them using a Genetic Algorithm (GA), (2) computing them using only dataset characteristics, and (3) assigning equal weights to all data points. Model performance under each strategy was evaluated using paired predictive and fairness metrics, which also served as optimization objectives for the GA during evolution. Specifically, we used two predictive metrics (accuracy and area under the Receiver Operating Characteristic curve) and two fairness metrics (demographic parity difference and subgroup false negative fairness). Using experiments on eleven publicly available datasets (including two medical datasets), we show that evolved sample weights can produce models that achieve better trade-offs between fairness and predictive performance than alternative weighting methods. However, the magnitude of these benefits depends strongly on the choice of optimization objectives. Our experiments reveal that optimizing with accuracy and demographic parity difference metrics yields the largest number of datasets for which evolved weights are significantly better than other weighting strategies in optimizing both objectives.
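The abstract contrasts reweighting strategies and the demographic parity difference metric. As a minimal illustration of two of the three strategies (equal weights vs. weights computed from dataset characteristics), the sketch below trains a classifier on synthetic data with scikit-learn's `sample_weight` argument and measures demographic parity difference by hand. The weighting rule shown (making the sensitive attribute and label statistically independent) is one common characteristics-based scheme, not necessarily the exact one used in the paper; the data, variable names, and helper function are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data (illustrative): a binary sensitive attribute that is
# correlated with both the features and the label.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)                     # sensitive attribute g
x = rng.normal(size=(n, 3)) + 0.5 * group[:, None]     # features shifted by group
y = (x[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

def demographic_parity_difference(y_pred, group):
    # |P(y_hat = 1 | g = 0) - P(y_hat = 1 | g = 1)|
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Strategy (3): equal weights for all data points.
clf_equal = LogisticRegression().fit(x, y)
dpd_equal = demographic_parity_difference(clf_equal.predict(x), group)

# Strategy (2): weights from dataset characteristics. One common rule
# assigns each (group, label) cell the weight P(g) * P(y) / P(g, y),
# so that group membership and label are independent after reweighting.
w = np.empty(n)
for g in (0, 1):
    for lab in (0, 1):
        cell = (group == g) & (y == lab)
        w[cell] = (group == g).mean() * (y == lab).mean() / cell.mean()

clf_rw = LogisticRegression().fit(x, y, sample_weight=w)
dpd_rw = demographic_parity_difference(clf_rw.predict(x), group)

print(f"DPD, equal weights:       {dpd_equal:.3f}")
print(f"DPD, computed weights:    {dpd_rw:.3f}")
```

Strategy (1), evolving the weight vector with a genetic algorithm, would treat `w` as the genome and use paired predictive and fairness metrics (e.g., accuracy and the demographic parity difference above) as the fitness objectives.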
Similar Papers
IFFair: Influence Function-driven Sample Reweighting for Fair Classification
Machine Learning (CS)
Fixes unfair computer decisions by changing how data is used.
Demystifying Diffusion Objectives: Reweighted Losses are Better Variational Bounds
Machine Learning (CS)
Makes AI create better pictures from less information.
SWiFT: Soft-Mask Weight Fine-tuning for Bias Mitigation
Machine Learning (CS)
Fixes computer mistakes that hurt people.