Fair Bayesian Data Selection via Generalized Discrepancy Measures
By: Yixuan Zhang, Jiabin Luo, Zhenggang Wang, and more
Potential Business Impact:
Improves AI fairness by selecting less biased training data.
Fairness concerns are increasingly critical as machine learning models are deployed in high-stakes applications. While existing fairness-aware methods typically intervene at the model level, they often suffer from high computational costs, limited scalability, and poor generalization. To address these challenges, we propose a Bayesian data selection framework that ensures fairness by aligning group-specific posterior distributions of model parameters and sample weights with a shared central distribution. Our framework supports flexible alignment via various distributional discrepancy measures, including Wasserstein distance, maximum mean discrepancy, and $f$-divergence, allowing geometry-aware control without imposing explicit fairness constraints. This data-centric approach mitigates group-specific biases in training data and improves fairness in downstream tasks, with theoretical guarantees. Experiments on benchmark datasets show that our method consistently outperforms existing data selection and model-based fairness methods in both fairness and accuracy.
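The abstract does not include the algorithm itself, but the core alignment idea can be illustrated with one of the discrepancy measures it names. Below is a minimal Python sketch of maximum mean discrepancy (MMD) between group-specific posterior samples and a shared central distribution; the function names, the RBF bandwidth, and the Gaussian toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # Pairwise RBF kernel values between rows of X and rows of Y.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(X, Y, bandwidth=1.0):
    # Biased (V-statistic) estimate of squared MMD between samples X and Y.
    k_xx = rbf_kernel(X, X, bandwidth).mean()
    k_yy = rbf_kernel(Y, Y, bandwidth).mean()
    k_xy = rbf_kernel(X, Y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Hypothetical setup: posterior samples of model parameters drawn per
# demographic group, plus samples from a shared central distribution.
rng = np.random.default_rng(0)
central = rng.normal(0.0, 1.0, size=(500, 4))        # shared central samples
group_posteriors = {
    "group_a": rng.normal(0.2, 1.0, size=(500, 4)),  # mildly shifted posterior
    "group_b": rng.normal(-0.5, 1.2, size=(500, 4)), # strongly shifted posterior
}

# A fairness-aware selection scheme could penalize groups whose posteriors
# drift far from the central distribution; here we only report the gaps.
for name, samples in group_posteriors.items():
    gap = mmd_squared(samples, central, bandwidth=1.0)
    print(f"{name}: squared MMD to central distribution = {gap:.4f}")
```

In this toy run, group_b's larger shift yields a larger squared MMD, which is the kind of group-to-center gap the framework would drive down; the paper's method would instead adjust sample weights (and could swap in Wasserstein distance or an f-divergence) rather than merely measure the gap.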
Similar Papers
On the Fairness of Privacy Protection: Measuring and Mitigating the Disparity of Group Privacy Risks for Differentially Private Machine Learning
Machine Learning (CS)
Protects everyone's data equally, not just some.
Alternative Fairness and Accuracy Optimization in Criminal Justice
Machine Learning (CS)
Makes AI fairer when judging people.