Optimal Fairness under Local Differential Privacy

Published: November 20, 2025 | arXiv ID: 2511.16377v1

By: Hrad Ghoukasian, Shahab Asoodeh

Potential Business Impact:

Enables organizations to privatize sensitive attributes while reducing bias in downstream machine learning classifiers, offering a better accuracy-fairness trade-off than existing pre- and post-processing methods.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

We investigate how to optimally design local differential privacy (LDP) mechanisms that reduce data unfairness and thereby improve fairness in downstream classification. We first derive a closed-form optimal mechanism for binary sensitive attributes and then develop a tractable optimization framework that yields the corresponding optimal mechanism for multi-valued attributes. As a theoretical contribution, we establish that for discrimination-accuracy optimal classifiers, reducing data unfairness necessarily leads to lower classification unfairness, thus providing a direct link between privacy-aware pre-processing and classification fairness. Empirically, we demonstrate that our approach consistently outperforms existing LDP mechanisms in reducing data unfairness across diverse datasets and fairness metrics, while maintaining accuracy close to that of non-private models. Moreover, compared with leading pre-processing and post-processing fairness methods, our mechanism achieves a more favorable accuracy-fairness trade-off while simultaneously preserving the privacy of sensitive attributes. Taken together, these results highlight LDP as a principled and effective pre-processing fairness intervention technique.
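The abstract does not reproduce the mechanism itself. For orientation, below is a minimal Python sketch of standard binary randomized response, the textbook baseline LDP mechanism for a binary sensitive attribute. The paper derives a closed-form optimal mechanism that will generally differ from this baseline; the function and parameter names here are illustrative, not the authors' API.

    import numpy as np

    def randomized_response(bit: int, epsilon: float, rng=None) -> int:
        """Binary randomized response: report the true sensitive bit with
        probability e^eps / (1 + e^eps), otherwise flip it.
        This choice of keep-probability satisfies epsilon-LDP."""
        rng = np.random.default_rng() if rng is None else rng
        p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
        return bit if rng.random() < p_keep else 1 - bit

    # Example: privatize a column of binary sensitive attributes at epsilon = 1.0
    rng = np.random.default_rng(0)
    private = [randomized_response(s, 1.0, rng) for s in [0, 1, 1, 0]]

For multi-valued attributes, an LDP mechanism is a row-stochastic channel whose entries must satisfy the epsilon-LDP ratio constraints; the paper poses the choice of this channel as a tractable optimization problem rather than fixing it in closed form as above.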

Country of Origin
🇨🇦 Canada

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)