Privacy Auditing of Large Language Models
By: Ashwinee Panda, Xinyu Tang, Milad Nasr, and more
Potential Business Impact:
Finds hidden private info in AI models.
Current techniques for privacy auditing of large language models (LLMs) have limited efficacy: they rely on basic approaches to generate canaries, which leads to weak membership inference attacks and, in turn, loose lower bounds on the empirical privacy leakage. We develop canaries that are far more effective than those used in prior work, under threat models that cover a range of realistic settings. Through extensive experiments on multiple families of fine-tuned LLMs, we demonstrate that our approach sets a new standard for detecting privacy leakage. When measuring the memorization rate of non-privately trained LLMs, our designed canaries surpass prior approaches. For example, on the Qwen2.5-0.5B model, our canaries achieve $49.6\%$ TPR at $1\%$ FPR, vastly surpassing the prior approach's $4.2\%$ TPR at $1\%$ FPR. Our method can be used to provide a privacy audit of $\varepsilon \approx 1$ for a model trained with a theoretical $\varepsilon$ of 4. To the best of our knowledge, this is the first time a privacy audit of LLM training has achieved nontrivial auditing success in a setting where the attacker cannot train shadow models, insert gradient canaries, or access the model at every iteration.
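The abstract reports attack strength as TPR at a fixed FPR on canary membership inference and converts that success rate into an empirical $\varepsilon$ lower bound. The sketch below is not the paper's method; it is a minimal, assumption-laden illustration of how those two quantities commonly relate in differential-privacy auditing: a score-threshold attack on per-canary scores, plus the standard $(\varepsilon, \delta)$-DP constraint $\mathrm{TPR} \le e^{\varepsilon}\,\mathrm{FPR} + \delta$. The function names and synthetic scores are hypothetical.

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """True-positive rate of a score-threshold membership attack at a fixed FPR.

    member_scores:    attack scores for canaries included in training.
    nonmember_scores: attack scores for held-out canaries.
    Higher score means the attacker is more confident the canary was a member.
    """
    # Pick the threshold so that roughly `target_fpr` of non-members exceed it.
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(np.asarray(member_scores) > threshold))

def empirical_epsilon_lower_bound(tpr, fpr, delta=0.0):
    """Point-estimate epsilon lower bound from TPR <= exp(eps) * FPR + delta.

    A rigorous audit would replace these point estimates with confidence
    intervals (e.g. Clopper-Pearson) before taking the logarithm.
    """
    if tpr <= delta or fpr <= 0:
        return 0.0
    return float(np.log((tpr - delta) / fpr))

# Toy example with synthetic attack scores (illustrative only).
rng = np.random.default_rng(0)
member_scores = rng.normal(loc=1.0, scale=1.0, size=1000)     # canaries seen in training
nonmember_scores = rng.normal(loc=0.0, scale=1.0, size=1000)  # held-out canaries

tpr = tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01)
eps_lb = empirical_epsilon_lower_bound(tpr, fpr=0.01)
print(f"TPR at 1% FPR: {tpr:.3f}, empirical epsilon lower bound: {eps_lb:.2f}")
```

Under this framing, stronger canaries raise the attack's TPR at the audited FPR, which directly tightens the empirical $\varepsilon$ lower bound toward the theoretical guarantee.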
Similar Papers
A Survey on Privacy Risks and Protection in Large Language Models
Cryptography and Security
Keeps your secrets safe from smart computer programs.
Black-Box Auditing of Quantum Model: Lifted Differential Privacy with Quantum Canaries
Machine Learning (CS)
Finds if quantum computers steal private data.
Auditing Pay-Per-Token in Large Language Models
Cryptography and Security
Catches companies cheating on AI service costs.