Score: 1

Unintended Memorization of Sensitive Information in Fine-Tuned Language Models

Published: January 24, 2026 | arXiv ID: 2601.17480v1

By: Marton Szep, Jorge Marin Ruiz, Georgios Kaissis, and more

Potential Business Impact:

Helps prevent fine-tuned AI models from leaking memorized private personal information.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Fine-tuning Large Language Models (LLMs) on sensitive datasets carries a substantial risk of unintended memorization and leakage of Personally Identifiable Information (PII), which can violate privacy regulations and compromise individual safety. In this work, we systematically investigate a critical and underexplored vulnerability: the exposure of PII that appears only in model inputs, not in training targets. Using both synthetic and real-world datasets, we design controlled extraction probes to quantify unintended PII memorization and study how factors such as language, PII frequency, task type, and model size influence memorization behavior. We further benchmark four privacy-preserving approaches (differential privacy, machine unlearning, regularization, and preference alignment), evaluating their trade-offs between privacy and task performance. Our results show that post-training methods generally provide more consistent privacy-utility trade-offs, while differential privacy achieves strong reductions in leakage in specific settings, although it can introduce training instability. These findings highlight the persistent challenge of memorization in fine-tuned LLMs and emphasize the need for robust, scalable privacy-preserving techniques.
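
The abstract does not specify the exact probe design, but a minimal sketch of a controlled extraction probe could look like the following: prompt the fine-tuned model with the context that preceded a PII string in the training inputs and check whether the PII surfaces in the greedy continuation. The model name and the probe pairs below are hypothetical placeholders, and the string-match criterion is an assumption, not the paper's protocol.

```python
# Minimal extraction-probe sketch for input-only PII memorization.
# Assumptions: a Hugging Face causal LM fine-tuned on the sensitive data, and
# (context_prefix, pii_value) pairs where the PII appeared only in model inputs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def pii_extraction_rate(model_name, probes, max_new_tokens=32):
    """probes: iterable of (prefix, pii_value) pairs; returns fraction leaked."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    leaked = 0
    for prefix, pii_value in probes:
        inputs = tokenizer(prefix, return_tensors="pt")
        with torch.no_grad():
            output_ids = model.generate(
                **inputs,
                max_new_tokens=max_new_tokens,
                do_sample=False,  # greedy decoding as a conservative probe
            )
        # Inspect only the newly generated tokens, not the prompt itself.
        continuation = tokenizer.decode(
            output_ids[0, inputs["input_ids"].shape[1]:],
            skip_special_tokens=True,
        )
        if pii_value.lower() in continuation.lower():
            leaked += 1
    return leaked / len(probes)

# Hypothetical usage:
# rate = pii_extraction_rate(
#     "my-finetuned-model",
#     [("Patient record for", "Jane Doe, born 12/03/1987"),
#      ("Contact the claimant at", "jane.doe@example.com")],
# )
# print(f"PII extraction rate: {rate:.2%}")
```

A metric of this kind can then be compared across the benchmarked defenses (differential privacy, machine unlearning, regularization, preference alignment) alongside task performance to expose the privacy-utility trade-off the paper studies.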

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)