Evaluating Differentially Private Generation of Domain-Specific Text
By: Yidan Sun, Viktor Schlegel, Srinivasan Nandakumar, and more
Potential Business Impact:
Creates fake data that keeps real secrets safe.
Generative AI offers transformative potential for high-stakes domains such as healthcare and finance, yet privacy and regulatory barriers hinder the use of real-world data. To address this, differentially private synthetic data generation has emerged as a promising alternative. In this work, we introduce a unified benchmark to systematically evaluate the utility and fidelity of text datasets generated under formal Differential Privacy (DP) guarantees. Our benchmark addresses key challenges in domain-specific benchmarking, including the choice of representative data, realistic privacy budgets, accounting for pre-training, and a variety of evaluation metrics. We assess state-of-the-art privacy-preserving generation methods across five domain-specific datasets, revealing significant utility and fidelity degradation compared to real data, especially under strict privacy constraints. These findings underscore the limitations of current approaches, highlight the need for more advanced privacy-preserving data-sharing methods, and set a precedent for their evaluation in realistic scenarios.
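To make the "utility" measurement concrete: a common recipe in benchmarks of this kind is to train a downstream model on the DP-synthetic text and evaluate it on held-out real text, then compare against a baseline trained on real data. The sketch below illustrates that idea only; the TF-IDF features, logistic-regression classifier, and variable names are illustrative assumptions, not the paper's actual evaluation pipeline.

```python
# Hypothetical "train on synthetic, test on real" utility check.
# The feature extractor and classifier here are placeholder choices,
# not the benchmark's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score


def downstream_utility(train_texts, train_labels, test_texts, test_labels):
    """Train a simple classifier on one corpus and score it on another.

    Calling this once with synthetic training data and once with real
    training data (same real test set) gives an estimate of the utility
    lost to the privacy-preserving generation step.
    """
    vectorizer = TfidfVectorizer(max_features=20_000)
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, train_labels)
    return f1_score(test_labels, clf.predict(X_test), average="macro")


# Usage with your own data (names are hypothetical):
# score_syn  = downstream_utility(syn_texts, syn_labels, real_test_texts, real_test_labels)
# score_real = downstream_utility(real_train_texts, real_train_labels, real_test_texts, real_test_labels)
# utility_drop = score_real - score_syn
```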
Similar Papers
SynBench: A Benchmark for Differentially Private Text Generation
Artificial Intelligence
Makes AI safe for private health and money data.
Differentially-private text generation degrades output language quality
Computation and Language
Shows that private AI writes less fluent and less useful text.
How to DP-fy Your Data: A Practical Guide to Generating Synthetic Data With Differential Privacy
Cryptography and Security
Creates fake data that protects real people's secrets.