Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models
By: Badrinath Ramakrishnan, Akshaya Balaji
Potential Business Impact:
Helps keep private information from leaking when language models are fine-tuned on sensitive data.
Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse natural language processing tasks, but their tendency to memorize training data poses significant privacy risks, particularly during fine-tuning. This paper presents a comprehensive empirical analysis of data memorization in fine-tuned LLMs and introduces a novel multi-layered privacy protection framework. Through controlled experiments on modern LLM architectures including GPT-2, Phi-3, and Gemma-2, we demonstrate that fine-tuning on repeated sensitive data increases privacy leakage rates from baseline levels of 0-5% to 60-75%, a 64.2-percentage-point average increase across the tested models. We propose and rigorously evaluate four complementary privacy protection methods: semantic data deduplication, differential privacy during generation, entropy-based filtering, and pattern-based content filtering. Our experimental results show that these techniques can reduce data leakage to 0% while maintaining 94.7% of original model utility.
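As a rough illustration of how two of the four defenses could be applied at generation time, the sketch below combines pattern-based content filtering with an entropy-based filter over candidate model outputs. Everything in it is an assumption made for illustration: the regexes, the use of character-level Shannon entropy as a stand-in for whatever statistic the paper's entropy filter actually uses, the thresholds, and the helper names (`looks_like_pii`, `entropy_filter`, `safe_to_release`) are not taken from the authors' implementation.

```python
import math
import re
from collections import Counter

# Illustrative PII patterns (assumed, not from the paper): email addresses,
# US-style SSNs, and 16-digit card-like numbers.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # SSN-like strings
    re.compile(r"\b(?:\d[ -]?){16}\b"),       # card-like digit sequences
]


def looks_like_pii(text: str) -> bool:
    """Pattern-based content filter: flag text matching any PII regex."""
    return any(p.search(text) for p in PII_PATTERNS)


def shannon_entropy(text: str) -> float:
    """Character-level Shannon entropy in bits per character."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def entropy_filter(text: str, low: float = 2.0, high: float = 5.0) -> bool:
    """Entropy-based filter: flag strings whose character entropy is
    unusually high (random-looking secrets or keys) or unusually low
    (highly repetitive output). Thresholds and the choice of
    character-level entropy are assumptions for this sketch."""
    h = shannon_entropy(text)
    return h < low or h > high


def safe_to_release(generation: str) -> bool:
    """Apply both output-side filters before releasing a model generation."""
    return not looks_like_pii(generation) and not entropy_filter(generation)


if __name__ == "__main__":
    print(safe_to_release("The quarterly report is attached."))        # True
    print(safe_to_release("Contact jane.doe@example.com for access"))  # False
```

The other two defenses would sit elsewhere in such a pipeline: semantic deduplication upstream, on the fine-tuning data, and differential privacy inside the generation step itself; this sketch covers only the two output-side filters.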
Similar Papers
Unintended Memorization of Sensitive Information in Fine-Tuned Language Models
Machine Learning (CS)
Studies how fine-tuned language models can unintentionally memorize sensitive information.
Position: Privacy Is Not Just Memorization!
Cryptography and Security
Argues that AI privacy risks go beyond what models memorize.
Revisiting Privacy, Utility, and Efficiency Trade-offs when Fine-Tuning Large Language Models
Artificial Intelligence
Weighs the trade-offs between privacy, utility, and efficiency when fine-tuning large language models.