Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models
By: Badrinath Ramakrishnan, Akshaya Balaji
Potential Business Impact:
Helps organizations keep sensitive information private when fine-tuning language models on their own data.
Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse natural language processing tasks, but their tendency to memorize training data poses significant privacy risks, particularly during fine-tuning. This paper presents a comprehensive empirical analysis of data memorization in fine-tuned LLMs and introduces a novel multi-layered privacy protection framework. Through controlled experiments on modern LLM architectures including GPT-2, Phi-3, and Gemma-2, we demonstrate that fine-tuning with repeated sensitive data increases privacy leakage rates from baseline levels of 0-5% to 60-75%, an average increase of 64.2 percentage points across the tested models. We propose and rigorously evaluate four complementary privacy protection methods: semantic data deduplication, differential privacy during generation, entropy-based filtering, and pattern-based content filtering. Our experimental results show that these techniques can reduce data leakage to 0% while maintaining 94.7% of original model utility.
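To make the generation-time defenses concrete, the sketch below illustrates how pattern-based and entropy-based output filtering could work. It is a minimal illustration under stated assumptions: the regex list, the 16-character minimum token length, the 4.0 bits-per-character threshold, and the `filter_generation` helper are all hypothetical choices for this example, not the authors' implementation, and the other two methods (semantic deduplication and differential privacy) are omitted.

```python
# Minimal sketch (not the authors' code) of two of the four defenses:
# pattern-based and entropy-based filtering applied to model output.
import math
import re
from collections import Counter

# Illustrative regexes for common sensitive formats; a real deployment
# would need a much broader, audited pattern set.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US-SSN-like strings
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # card-number-like digit runs
]

def shannon_entropy(text: str) -> float:
    """Shannon entropy, in bits per character, of the character distribution."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def filter_generation(text: str, entropy_threshold: float = 4.0) -> str | None:
    """Return text if it passes both filters, otherwise None (suppressed).

    The 4.0 bits/char threshold is an assumption: long natural-language
    words rarely exceed ~3.5 bits/char, while random key-like tokens whose
    characters seldom repeat sit near log2(len(token)) and above.
    """
    # Pattern-based filtering: suppress output matching any sensitive regex.
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        return None
    # Entropy-based filtering: flag long, high-entropy tokens (key-like
    # strings), checked per token so surrounding prose cannot dilute a secret.
    for token in text.split():
        if len(token) >= 16 and shannon_entropy(token) > entropy_threshold:
            return None
    return text

if __name__ == "__main__":
    print(filter_generation("The model retains 94.7% of its utility."))    # passes
    print(filter_generation("Contact alice@example.com for the dataset."))  # -> None
    print(filter_generation("Leaked key: q9X7vL2pZkR4mN8tWb3J"))            # -> None
```

Both checks run on candidate generations before they are returned; the paper's other two methods, semantic data deduplication and differential privacy, act earlier, on the fine-tuning data and the decoding step respectively.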
Similar Papers
Position: Privacy Is Not Just Memorization!
Cryptography and Security
Argues that LLM privacy risks extend beyond memorization of training data.
Private Memorization Editing: Turning Memorization into a Defense to Strengthen Data Privacy in Large Language Models
Cryptography and Security
Edits memorized content within the model to defend against private-data leakage.
Beyond Data Privacy: New Privacy Risks for Large Language Models
Cryptography and Security
Surveys emerging privacy risks in LLMs beyond training-data leakage.