Fairness Meets Privacy: Integrating Differential Privacy and Demographic Parity in Multi-class Classification
By: Lilian Say, Christophe Denis, Rafael Pinot
Potential Business Impact:
Keeps private data safe while keeping decisions fair.
The increasing use of machine learning in sensitive applications demands algorithms that simultaneously preserve data privacy and ensure fairness across potentially sensitive sub-populations. While privacy and fairness have each been extensively studied, their joint treatment remains poorly understood. Existing research often frames them as conflicting objectives, with multiple studies suggesting that strong privacy notions such as differential privacy inevitably compromise fairness. In this work, we challenge that perspective by showing that differential privacy can be integrated into a fairness-enhancing pipeline with minimal impact on fairness guarantees. We design a post-processing algorithm, called DP2DP, that enforces both demographic parity and differential privacy. Our analysis reveals that our algorithm converges towards its demographic parity objective at essentially the same rate (up to logarithmic factors) as the best non-private methods from the literature. Experiments on both synthetic and real datasets confirm our theoretical results, showing that the proposed algorithm achieves state-of-the-art accuracy/fairness/privacy trade-offs.
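To make the idea of combining differential privacy with demographic-parity post-processing concrete, the sketch below shows one generic way such a pipeline could look: per-group predicted-class rates are released through the Laplace mechanism, and per-group scores are then nudged toward a common target distribution. This is an illustrative assumption-laden sketch, not the authors' DP2DP algorithm; the function names, the additive-correction heuristic, and the single-step update are all inventions for exposition.

```python
# Illustrative sketch only (NOT the DP2DP algorithm from the paper):
# combine the Laplace mechanism with a simple demographic-parity
# post-processing step on multi-class scores.
import numpy as np


def private_class_rates(preds, groups, n_classes, epsilon, rng):
    """Per-group predicted-class rates estimated under epsilon-DP (sensitivity-1 counts)."""
    rates = {}
    for g in np.unique(groups):
        counts = np.bincount(preds[groups == g], minlength=n_classes).astype(float)
        counts += rng.laplace(0.0, 1.0 / epsilon, size=n_classes)  # Laplace mechanism
        counts = np.clip(counts, 0.0, None)
        rates[g] = counts / max(counts.sum(), 1e-12)
    return rates


def fair_post_process(scores, groups, epsilon, rng, step=0.5):
    """One additive correction toward demographic parity using noisy group rates."""
    n_classes = scores.shape[1]
    preds = scores.argmax(axis=1)
    rates = private_class_rates(preds, groups, n_classes, epsilon, rng)
    target = np.mean(list(rates.values()), axis=0)  # common target class distribution
    adjusted = scores.copy()
    for g, r in rates.items():
        adjusted[groups == g] += step * (target - r)  # boost under-predicted classes per group
    return adjusted.argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 2000, 3
    groups = rng.integers(0, 2, size=n)
    # Synthetic scores with a group-dependent bias toward class 0.
    scores = rng.normal(size=(n, k)) + 0.8 * groups[:, None] * np.array([1.0, -0.5, -0.5])
    fair_preds = fair_post_process(scores, groups, epsilon=1.0, rng=rng)
    for g in (0, 1):
        mask = groups == g
        print(f"group {g} class rates:", np.bincount(fair_preds[mask], minlength=k) / mask.sum())
```

Under this kind of scheme, only noisy aggregate statistics of the predictions are used to adjust decisions, which is what lets the fairness correction inherit a differential-privacy guarantee; the paper's contribution is showing that such a private correction can match the convergence rate of the best non-private demographic-parity methods up to logarithmic factors.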
Similar Papers
On the Fairness of Privacy Protection: Measuring and Mitigating the Disparity of Group Privacy Risks for Differentially Private Machine Learning
Machine Learning (CS)
Protects everyone's data equally, not just some.
Optimal Fairness under Local Differential Privacy
Machine Learning (CS)
Makes private data fair for computers.
Differential Privacy for Deep Learning in Medicine
Machine Learning (CS)
Keeps patient data safe while training AI.