PerProb: Indirectly Evaluating Memorization in Large Language Models
By: Yihan Liao, Jacky Keung, Xiaoxue Ma, and more
Potential Business Impact:
Finds if private data is in an AI's memory.
The rapid advancement of Large Language Models (LLMs) has been driven by extensive datasets that may contain sensitive information, raising serious privacy concerns. One notable threat is the Membership Inference Attack (MIA), where adversaries infer whether a specific sample was used in model training. However, the true impact of MIA on LLMs remains unclear due to inconsistent findings and the lack of standardized evaluation methods, further complicated by the undisclosed nature of many LLM training sets. To address these limitations, we propose PerProb, a unified, label-free framework for indirectly assessing LLM memorization vulnerabilities. PerProb evaluates changes in perplexity and average log probability between data generated by victim and adversary models, enabling indirect estimation of training-induced memorization. Compared with prior MIA methods that rely on member/non-member labels or internal access, PerProb is model- and task-agnostic, and applicable in both black-box and white-box settings. Through a systematic classification of MIA into four attack patterns, we evaluate PerProb's effectiveness across five datasets, revealing differing memorization behaviors and privacy risks among LLMs. Additionally, we assess mitigation strategies, including knowledge distillation, early stopping, and differential privacy, demonstrating their effectiveness in reducing data leakage. Our findings offer a practical and generalizable framework for evaluating and improving LLM privacy.
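To make the two signals concrete, the sketch below computes a text's perplexity and average token log probability under a causal language model with Hugging Face transformers. It is a minimal illustration of the quantities PerProb compares, not the paper's implementation; the model choice (gpt2) and the helper name perplexity_and_avg_logprob are assumptions for the example.

```python
# Minimal sketch: perplexity and average token log probability of a text
# under a causal LM. These are the two signals PerProb compares between
# data generated by victim and adversary models. "gpt2" is illustrative only.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def perplexity_and_avg_logprob(model, tokenizer, text):
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model shifts targets internally and
        # returns the mean cross-entropy (negative log-likelihood) per token.
        out = model(enc["input_ids"], labels=enc["input_ids"])
    nll = out.loss.item()          # mean negative log probability per token
    return math.exp(nll), -nll     # (perplexity, average log probability)


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    ppl, avg_lp = perplexity_and_avg_logprob(
        lm, tok, "The quick brown fox jumps over the lazy dog."
    )
    print(f"perplexity={ppl:.2f}  avg_logprob={avg_lp:.3f}")
```

In a PerProb-style comparison, markedly lower perplexity (equivalently, higher average log probability) on victim-generated text than a reference model would assign is taken as indirect evidence of training-induced memorization.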
Similar Papers
On the Effectiveness of Membership Inference in Targeted Data Extraction from Large Language Models
Machine Learning (CS)
Stops AI from accidentally sharing private secrets.
Membership Inference Attacks on Large-Scale Models: A Survey
Machine Learning (CS)
Finds if your private info trained an AI.
Exposing and Defending Membership Leakage in Vulnerability Prediction Models
Cryptography and Security
Stops attackers from spying on a code-analysis AI's training data.