Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models

Published: August 10, 2025 | arXiv ID: 2508.14062v1

By: Badrinath Ramakrishnan, Akshaya Balaji

Potential Business Impact:

Helps keep sensitive information from leaking out of AI models that are fine-tuned on private data.

Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse natural language processing tasks, but their tendency to memorize training data poses significant privacy risks, particularly during fine-tuning processes. This paper presents a comprehensive empirical analysis of data memorization in fine-tuned LLMs and introduces a novel multi-layered privacy protection framework. Through controlled experiments on modern LLM architectures including GPT-2, Phi-3, and Gemma-2, we demonstrate that fine-tuning with repeated sensitive data increases privacy leakage rates from baseline levels of 0-5% to 60-75%, representing a 64.2% average increase across tested models. We propose and rigorously evaluate four complementary privacy protection methods: semantic data deduplication, differential privacy during generation, entropy-based filtering, and pattern-based content filtering. Our experimental results show that these techniques can reduce data leakage to 0% while maintaining 94.7% of original model utility.
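The abstract names four mitigation techniques without spelling out their mechanics. As a rough illustration only, the sketch below shows how two of them, entropy-based filtering and pattern-based content filtering, could be applied as post-generation output filters. The function names, regexes, and threshold are assumptions for illustration and are not taken from the paper's implementation.

```python
import math
import re
from collections import Counter

# Hypothetical regexes for sensitive patterns (emails, SSN-style IDs, phone numbers).
# The actual pattern set used in the paper is not specified in the abstract.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                                   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                                     # US SSN-style numbers
    re.compile(r"\b(?:\+?\d{1,2}[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"),   # phone numbers
]

def shannon_entropy(text: str) -> float:
    """Character-level Shannon entropy in bits; unusually low values can flag
    repetitive or verbatim-regurgitated output."""
    counts = Counter(text)
    total = len(text)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def filter_generation(text: str, entropy_threshold: float = 3.0) -> str | None:
    """Return the generated text if it passes both filters, otherwise None (blocked).
    The 3.0-bit threshold is an illustrative assumption, not a value from the paper."""
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        return None  # pattern-based filter: output looks like leaked PII
    if shannon_entropy(text) < entropy_threshold:
        return None  # entropy-based filter: suspiciously low-entropy output
    return text

if __name__ == "__main__":
    samples = [
        "Contact John at john.doe@example.com for the records.",
        "aaaaaaaaaaaaaaaaaaaaaaaaaaaa",
        "Fine-tuning can amplify memorization of rare training strings.",
    ]
    for s in samples:
        print(repr(s), "->", "blocked" if filter_generation(s) is None else "passed")
```

In this reading, the filters act as a last line of defense at generation time, complementing the training-time measures (semantic deduplication of the fine-tuning data and differential privacy) evaluated in the paper.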

Page Count
14 pages

Category
Computer Science: Computation and Language