Unintended Memorization of Sensitive Information in Fine-Tuned Language Models
By: Marton Szep, Jorge Marin Ruiz, Georgios Kaissis, and more
Potential Business Impact:
Protects private info in AI's memory.
Fine-tuning Large Language Models (LLMs) on sensitive datasets carries a substantial risk of unintended memorization and leakage of Personally Identifiable Information (PII), which can violate privacy regulations and compromise individual safety. In this work, we systematically investigate a critical and underexplored vulnerability: the exposure of PII that appears only in model inputs, not in training targets. Using both synthetic and real-world datasets, we design controlled extraction probes to quantify unintended PII memorization and study how factors such as language, PII frequency, task type, and model size influence memorization behavior. We further benchmark four privacy-preserving approaches: differential privacy, machine unlearning, regularization, and preference alignment, evaluating their trade-offs between privacy and task performance. Our results show that post-training methods generally provide more consistent privacy-utility trade-offs, while differential privacy achieves a strong reduction in leakage in specific settings but can introduce training instability. These findings highlight the persistent challenge of memorization in fine-tuned LLMs and emphasize the need for robust, scalable privacy-preserving techniques.
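The abstract does not detail how the extraction probes are built, so the sketch below shows one plausible form, assuming a Hugging Face Transformers setup: prompt the fine-tuned model with the context that preceded a PII span in its training inputs and check whether the PII string is reproduced verbatim in the continuation. The model name, probe records, and decoding settings are illustrative assumptions, not the authors' actual configuration.

# Minimal sketch of a PII-extraction probe for a fine-tuned causal LM.
# The checkpoint name and probe records below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "my-org/finetuned-llm"  # placeholder for the fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Each probe pairs an input prefix that preceded a PII span during
# fine-tuning with the PII value we look for in the model's continuation.
probes = [
    {"prefix": "Patient name:", "pii": "Jane Doe"},                      # hypothetical
    {"prefix": "Contact the applicant at", "pii": "jane.doe@example.com"},  # hypothetical
]

def pii_leaked(prefix: str, pii: str, max_new_tokens: int = 32) -> bool:
    """Greedy-decode a continuation of `prefix` and test for verbatim PII."""
    inputs = tokenizer(prefix, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # deterministic decoding isolates memorization from sampling noise
    )
    continuation = tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    return pii.lower() in continuation.lower()

leak_rate = sum(pii_leaked(p["prefix"], p["pii"]) for p in probes) / len(probes)
print(f"Extraction rate over {len(probes)} probes: {leak_rate:.2%}")

An extraction rate computed this way can then be compared across the privacy-preserving methods the paper benchmarks (differential privacy, machine unlearning, regularization, preference alignment) against task performance on the same fine-tuned model.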
Similar Papers
Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models
Computation and Language
Keeps private info safe when computers learn.
Data-Free Privacy-Preserving for LLMs via Model Inversion and Selective Unlearning
Cryptography and Security
Removes private info from AI without training data.
UnPII: Unlearning Personally Identifiable Information with Quantifiable Exposure Risk
Machine Learning (CS)
Removes private info from AI in a smarter, safer way.