DP-MGTD: Privacy-Preserving Machine-Generated Text Detection via Adaptive Differentially Private Entity Sanitization
By: Lionel Z. Wang, Yusheng Zhao, Jiabin Luo and more
Potential Business Impact:
Finds fake writing without spying on you.
The deployment of Machine-Generated Text (MGT) detection systems necessitates processing sensitive user data, creating a fundamental conflict between authorship verification and privacy preservation. Standard anonymization techniques often disrupt linguistic fluency, while rigorous Differential Privacy (DP) mechanisms typically degrade the statistical signals required for accurate detection. To resolve this dilemma, we propose DP-MGTD, a framework incorporating an Adaptive Differentially Private Entity Sanitization algorithm. Our approach uses a two-stage mechanism that performs noisy frequency estimation and dynamically calibrates privacy budgets, applying the Laplace and Exponential mechanisms to numerical and textual entities, respectively. Crucially, we identify a counter-intuitive phenomenon: applying DP noise amplifies the distinguishability between human and machine text by exposing their distinct sensitivity patterns to perturbation. Extensive experiments on the MGTBench-2.0 dataset show that our method achieves near-perfect detection accuracy, significantly outperforming non-private baselines while satisfying strict privacy guarantees.
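To make the two sanitization primitives concrete, the sketch below shows the standard Laplace mechanism (additive noise for a numerical entity) and the standard exponential mechanism (noisy selection of a replacement textual entity). This is a minimal illustration of the two textbook DP mechanisms the abstract names, not the authors' implementation; the function names, the candidate set, and the utility function are all assumptions for illustration.

```python
import math
import random

def laplace_sanitize(value, sensitivity, epsilon):
    """Laplace mechanism: add noise with scale sensitivity/epsilon.

    Illustrative sketch, not the paper's code. Samples Laplace noise
    via inverse-CDF from a uniform draw on (-0.5, 0.5).
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # in [-0.5, 0.5); endpoint has negligible probability
    sign = 1.0 if u >= 0 else -1.0
    return value - scale * sign * math.log(1.0 - 2.0 * abs(u))

def exponential_sanitize(entity, candidates, utility, epsilon, sensitivity=1.0):
    """Exponential mechanism: pick a replacement textual entity with
    probability proportional to exp(epsilon * utility / (2 * sensitivity)).

    `candidates` and `utility` are hypothetical inputs; in the paper's
    setting they would come from the entity-sanitization stage.
    """
    weights = [
        math.exp(epsilon * utility(entity, c) / (2.0 * sensitivity))
        for c in candidates
    ]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return cand
    return candidates[-1]  # guard against floating-point rounding

# Hypothetical usage: sanitize a salary figure and a city name.
noisy_salary = laplace_sanitize(52000.0, sensitivity=1000.0, epsilon=1.0)
city = exponential_sanitize(
    "London",
    ["London", "Paris", "Berlin"],
    utility=lambda orig, cand: 1.0 if cand == orig else 0.0,
    epsilon=1.0,
)
```

An adaptive scheme in the spirit of the abstract would set `epsilon` per entity from a noisy frequency estimate (rarer entities getting stronger protection); that calibration step is specific to the paper and not reproduced here.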
Similar Papers
Leveraging Semantic Triples for Private Document Generation with Local Differential Privacy Guarantees
Computation and Language
Keeps writing private, even with less privacy.
Differentially-private text generation degrades output language quality
Computation and Language
Makes private AI talk less, worse, and less useful.
Stress-testing Machine Generated Text Detection: Shifting Language Models Writing Style to Fool Detectors
Computation and Language
Makes AI-written text harder to spot.