Differentially-private text generation degrades output language quality
By: Erion Çano, Ivan Habernal
Potential Business Impact:
Makes private AI produce shorter, lower-quality, and less useful text.
Ensuring user privacy by synthesizing data from large language models (LLMs) fine-tuned under differential privacy (DP) has recently become popular. However, the impact of DP fine-tuning on the linguistic quality and utility of the texts these models produce has not been investigated. In this work, we fine-tune five LLMs on three corpora under four privacy levels and assess the length, grammatical correctness, and lexical diversity of the text outputs they produce. We also probe the utility of the synthetic outputs in downstream classification tasks: book genre recognition from book descriptions and cause-of-death recognition from verbal autopsies. The results indicate that LLMs tuned under stronger privacy constraints produce texts that are shorter by at least 77%, less grammatically correct by at least 9%, and less diverse by at least 10% in bi-gram diversity. Furthermore, the accuracy they reach in downstream classification tasks decreases, which might be detrimental to the usefulness of the generated synthetic data.
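The abstract does not specify how bi-gram diversity is measured, but a common definition is distinct-n: the fraction of unique n-grams among all n-grams in the generated texts. The sketch below is an illustrative, whitespace-tokenized implementation of that metric, not the authors' actual evaluation code; the function name and example strings are hypothetical.

```python
from collections import Counter

def distinct_n(texts, n=2):
    """Fraction of unique n-grams across a set of generated texts.

    A common proxy for lexical diversity: 1.0 means every n-gram
    occurs once; values near 0 indicate highly repetitive output.
    """
    ngrams = Counter()
    for text in texts:
        tokens = text.split()  # whitespace tokenization, for illustration only
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# Hypothetical example: diverse output vs. repetitive output,
# mimicking the drop one might see under strong privacy constraints.
diverse = ["the detective follows a trail of clues across the city"]
repetitive = ["the book is about the book is about the book"]
print(f"distinct-2 (diverse):    {distinct_n(diverse):.2f}")     # 1.00
print(f"distinct-2 (repetitive): {distinct_n(repetitive):.2f}")  # 0.44
```

On this definition, the paper's reported "at least 10% less diverse in bi-gram diversity" would correspond to a relative drop in the distinct-2 score between non-private and DP-tuned models.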
Similar Papers
Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
Cryptography and Security
Keeps private info safe when AI learns new things.
Evaluating Differentially Private Generation of Domain-Specific Text
Machine Learning (CS)
Creates fake data that keeps real secrets safe.
SynBench: A Benchmark for Differentially Private Text Generation
Artificial Intelligence
Makes AI safe for private health and money data.