Optimal Fairness under Local Differential Privacy
By: Hrad Ghoukasian, Shahab Asoodeh
Potential Business Impact:
Lets organizations privatize sensitive personal attributes while still training classifiers that stay accurate and become fairer.
We investigate how to optimally design local differential privacy (LDP) mechanisms that reduce data unfairness and thereby improve fairness in downstream classification. We first derive a closed-form optimal mechanism for binary sensitive attributes and then develop a tractable optimization framework that yields the corresponding optimal mechanism for multi-valued attributes. As a theoretical contribution, we establish that for discrimination-accuracy optimal classifiers, reducing data unfairness necessarily leads to lower classification unfairness, thus providing a direct link between privacy-aware pre-processing and classification fairness. Empirically, we demonstrate that our approach consistently outperforms existing LDP mechanisms in reducing data unfairness across diverse datasets and fairness metrics, while maintaining accuracy close to that of non-private models. Moreover, compared with leading pre-processing and post-processing fairness methods, our mechanism achieves a more favorable accuracy-fairness trade-off while simultaneously preserving the privacy of sensitive attributes. Taken together, these results highlight LDP as a principled and effective pre-processing fairness intervention technique.
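To make the pre-processing idea concrete, the following Python sketch applies the standard binary randomized-response mechanism (the canonical epsilon-LDP mechanism for a binary sensitive attribute) and measures data unfairness via a statistical parity gap. The function names, parameters, and toy data are illustrative assumptions; the paper's closed-form optimal mechanism and its multi-valued extension are not reproduced here.

import numpy as np

def randomized_response(attr, epsilon, rng=None):
    """Binary randomized response: the canonical epsilon-LDP mechanism.

    Each 0/1 sensitive value is kept with probability exp(eps) / (1 + exp(eps))
    and flipped otherwise. Illustrative baseline only; not the paper's
    fairness-optimal mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    attr = np.asarray(attr)
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    keep = rng.random(attr.shape) < keep_prob
    return np.where(keep, attr, 1 - attr)

def statistical_parity_gap(labels, groups):
    """One common measure of data unfairness: the absolute gap in the
    positive-label rate between the two groups."""
    labels, groups = np.asarray(labels), np.asarray(groups)
    return abs(labels[groups == 1].mean() - labels[groups == 0].mean())

# Hypothetical toy data: y is the target, s the binary sensitive attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)
y = (rng.random(1000) < np.where(s == 1, 0.7, 0.4)).astype(int)

# Privatize the sensitive column before any downstream training.
s_private = randomized_response(s, epsilon=1.0, rng=rng)
print("parity gap, raw attribute:       ", statistical_parity_gap(y, s))
print("parity gap, privatized attribute:", statistical_parity_gap(y, s_private))

In this generic setting, the random flipping weakens the statistical dependence between the released attribute and the label, which is the effect the paper targets; the paper's contribution is choosing the flipping probabilities optimally for fairness rather than using the symmetric randomized-response defaults.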
Similar Papers
Fairness Meets Privacy: Integrating Differential Privacy and Demographic Parity in Multi-class Classification
Machine Learning (Stat)
Keeps private data safe while being fair.
High-Probability Bounds For Heterogeneous Local Differential Privacy
Machine Learning (Stat)
Protects your private info while still getting useful data.
On the Fairness of Privacy Protection: Measuring and Mitigating the Disparity of Group Privacy Risks for Differentially Private Machine Learning
Machine Learning (CS)
Protects everyone's data equally, not just some.