Score: 1

Empirical Calibration and Metric Differential Privacy in Language Models

Published: March 18, 2025 | arXiv ID: 2503.13872v1

By: Pedro Faustini, Natasha Fernandes, Annabelle McIver and more

Potential Business Impact:

Stronger protection of private text data in AI language models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

NLP models trained with differential privacy (DP) usually adopt the DP-SGD framework, and privacy guarantees are often reported in terms of the privacy budget $\epsilon$. However, $\epsilon$ does not have any intrinsic meaning, and it is generally not possible to compare across variants of the framework. Work in image processing has therefore explored how to empirically calibrate noise across frameworks using Membership Inference Attacks (MIAs). However, this kind of calibration has not been established for NLP. In this paper, we show that MIAs offer little help in calibrating privacy, whereas reconstruction attacks are more useful. As a use case, we define a novel kind of directional privacy based on the von Mises-Fisher (VMF) distribution, a metric DP mechanism that perturbs angular distance rather than adding (isotropic) Gaussian noise, and apply this to NLP architectures. We show that, even though formal guarantees are incomparable, empirical privacy calibration reveals that each mechanism has different areas of strength with respect to utility-privacy trade-offs.
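To make the directional mechanism concrete, below is a minimal sketch of how a von Mises-Fisher (VMF) perturbation of an embedding vector could look in Python. This is an illustrative assumption, not the paper's implementation: the `sample_vmf` and `perturb_embedding` names, the use of Wood's (1994) rejection sampler, and the choice of the concentration parameter `kappa` are all stand-ins, and how such noise is wired into an NLP architecture or calibrated against Gaussian DP-SGD noise is not reproduced here.

```python
import numpy as np


def sample_vmf(mu: np.ndarray, kappa: float, rng: np.random.Generator) -> np.ndarray:
    """Draw one sample from a von Mises-Fisher distribution on the unit sphere.

    Uses Wood's (1994) rejection sampler for the cosine of the angle to the
    mean direction, then rotates a uniform tangent direction onto `mu`.
    `mu` must be a unit vector; larger `kappa` concentrates samples around it.
    """
    d = mu.shape[0]

    # Step 1: sample w = cos(angle between the sample and mu).
    b = (-2.0 * kappa + np.sqrt(4.0 * kappa**2 + (d - 1.0) ** 2)) / (d - 1.0)
    x0 = (1.0 - b) / (1.0 + b)
    c = kappa * x0 + (d - 1.0) * np.log(1.0 - x0**2)
    while True:
        z = rng.beta((d - 1.0) / 2.0, (d - 1.0) / 2.0)
        w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
        u = rng.uniform()
        if kappa * w + (d - 1.0) * np.log(1.0 - x0 * w) - c >= np.log(u):
            break

    # Step 2: sample a direction v uniformly on the sphere orthogonal to mu.
    v = rng.normal(size=d)
    v -= v.dot(mu) * mu
    v /= np.linalg.norm(v)

    # Step 3: combine into a unit vector at angle arccos(w) from mu.
    return w * mu + np.sqrt(max(1.0 - w**2, 0.0)) * v


def perturb_embedding(x: np.ndarray, kappa: float, rng: np.random.Generator) -> np.ndarray:
    """Perturb the *direction* of an embedding while keeping its norm.

    This is the angular analogue of adding isotropic Gaussian noise:
    only the orientation of x on the sphere is randomised.
    """
    norm = np.linalg.norm(x)
    mu = x / norm
    return norm * sample_vmf(mu, kappa, rng)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=128)          # a hypothetical token/sentence embedding
    noisy = perturb_embedding(emb, kappa=50.0, rng=rng)
    cos_sim = emb @ noisy / (np.linalg.norm(emb) * np.linalg.norm(noisy))
    print(f"cosine similarity after VMF perturbation: {cos_sim:.3f}")
```

In this sketch `kappa` plays the role that the noise scale plays for a Gaussian mechanism: smaller `kappa` means more angular noise and hence stronger (metric-DP-style) privacy, larger `kappa` preserves more utility, which is the trade-off the paper's empirical calibration compares across mechanisms.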

Country of Origin
🇦🇺 Australia


Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)