Memorization in Fine-Tuned Large Language Models
By: Danil Savine
Potential Business Impact:
Helps AI models avoid memorizing private medical information.
This study investigates the mechanisms and factors that influence memorization in fine-tuned large language models (LLMs), focusing on the medical domain because of its privacy-sensitive nature. We examine how different aspects of the fine-tuning process affect a model's propensity to memorize training data, using the PHEE dataset of pharmacovigilance events. Our research employs two main approaches: a membership inference attack to detect memorized data, and a generation task with prompted prefixes to assess verbatim reproduction. We analyze the impact of adapting different weight matrices in the transformer architecture, the relationship between perplexity and memorization, and the effect of increasing the rank in low-rank adaptation (LoRA) fine-tuning. Key findings include: (1) Value and Output matrices contribute more to memorization than Query and Key matrices; (2) lower perplexity in the fine-tuned model correlates with increased memorization; (3) higher LoRA ranks increase memorization, but with diminishing returns at larger ranks. These results clarify the trade-off between model performance and privacy risk in fine-tuned LLMs, and they inform more effective and responsible strategies for adapting large language models while managing data privacy concerns.
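As a concrete illustration of the LoRA setup the abstract describes, the sketch below adapts only the Value and Output projections of a causal language model with the Hugging Face peft library. The base model name, rank, and other hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: LoRA fine-tuning that adapts only the Value and Output
# projections, the matrices the abstract identifies as contributing most
# to memorization. Model name and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base model

config = LoraConfig(
    r=16,                                 # LoRA rank; the study sweeps this and finds
    lora_alpha=32,                        # diminishing memorization gains at larger ranks
    target_modules=["v_proj", "o_proj"],  # Value/Output; use ["q_proj", "k_proj"] to compare
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```

Swapping `target_modules` between the Value/Output and Query/Key projections, with everything else fixed, is the natural way to compare the memorization contribution of each matrix family.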
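The two measurement approaches can likewise be sketched in a few lines: a loss-based membership signal (lower perplexity on a candidate record suggests it was seen during fine-tuning) and a prefix-completion check for verbatim reproduction. The threshold, prefix length, and helper names below are hypothetical, and this is a simplified stand-in for the paper's actual attack.

```python
# Sketch of the two measurements, assuming `model` and `tokenizer` come
# from the fine-tuned checkpoint. Threshold and prefix length are
# illustrative choices, not values from the paper.
import torch

@torch.no_grad()
def perplexity(model, tokenizer, text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean per-token negative log-likelihood
    return torch.exp(loss).item()

def likely_member(model, tokenizer, text, threshold=10.0):
    # Loss-based membership inference: unusually low perplexity on a
    # candidate suggests it was part of the fine-tuning data.
    return perplexity(model, tokenizer, text) < threshold

@torch.no_grad()
def reproduces_verbatim(model, tokenizer, text, prefix_tokens=50):
    # Prompt with the first `prefix_tokens` tokens of a training record
    # and test whether greedy decoding reproduces the rest exactly.
    # Assumes `text` is longer than the prefix.
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    prefix, target = ids[:prefix_tokens], ids[prefix_tokens:]
    out = model.generate(prefix.unsqueeze(0),
                         max_new_tokens=len(target),
                         do_sample=False)
    # torch.equal returns False on a shape mismatch (e.g. early EOS)
    return torch.equal(out[0, prefix_tokens:], target)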
Similar Papers
Impact of Fine-Tuning Methods on Memorization in Large Language Models
Computation and Language
Keeps private AI training data more secure.
Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models
Computation and Language
Keeps private information safe while AI models learn.