Data-Free Privacy-Preserving for LLMs via Model Inversion and Selective Unlearning
By: Xinjie Zhou, Zhihui Yang, Lechao Cheng and more
Potential Business Impact:
Removes private info from AI without needing its training data.
Large language models (LLMs) exhibit powerful capabilities but risk memorizing sensitive personally identifiable information (PII) from their training data, posing significant privacy concerns. While machine unlearning techniques aim to remove such data, they predominantly depend on access to the original training data, a requirement that is often impractical because training data in real-world deployments is commonly proprietary or inaccessible. To address this limitation, we propose Data-Free Selective Unlearning (DFSU), a novel privacy-preserving framework that removes sensitive PII from an LLM without requiring its training data. Our approach first synthesizes pseudo-PII through language model inversion, then constructs token-level privacy masks for these synthetic samples, and finally performs token-level selective unlearning via a contrastive mask loss within a low-rank adaptation (LoRA) subspace. Extensive experiments on the AI4Privacy PII-Masking dataset with Pythia models demonstrate that our method effectively removes the targeted PII while maintaining model utility.
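The abstract does not spell out the form of the contrastive mask loss, so the following is only a minimal sketch of what the final unlearning stage might look like: gradient ascent on tokens flagged by a token-level privacy mask and a standard language-modeling loss on the remaining tokens, with updates confined to a LoRA adapter. The model size (EleutherAI/pythia-160m), the use of the peft library, and the names privacy_mask and alpha are assumptions for illustration, not details from the paper.

```python
# Sketch of token-level selective unlearning in a LoRA subspace (assumed loss form,
# not the paper's exact DFSU objective): ascend on masked pseudo-PII tokens,
# retain the language-modeling loss on all other tokens.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/pythia-160m"  # Pythia family, as in the paper's experiments
tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(model_name)

# Restrict trainable parameters to a low-rank adaptation (LoRA) subspace.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query_key_value"])
model = get_peft_model(base, lora_cfg)

def contrastive_mask_loss(logits, labels, privacy_mask, alpha=1.0):
    """Maximize loss on PII tokens (mask == 1), minimize it on the rest (mask == 0)."""
    # Shift so token t is predicted from positions < t.
    logits = logits[:, :-1, :].contiguous()
    labels = labels[:, 1:].contiguous()
    mask = privacy_mask[:, 1:].contiguous().float()

    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
    ).view(labels.size())

    forget = (per_token * mask).sum() / mask.sum().clamp(min=1)            # pseudo-PII tokens
    retain = (per_token * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)  # utility tokens
    return retain - alpha * forget  # minimizing this unlearns PII while preserving the rest

# One illustrative step on a synthetic (pseudo-PII) sample.
text = "Contact Jane Doe at jane.doe@example.com for details."
enc = tokenizer(text, return_tensors="pt")
privacy_mask = torch.zeros_like(enc["input_ids"])  # 1 on pseudo-PII tokens in practice
privacy_mask[:, 2:5] = 1                           # placeholder span marking the name/email

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # only LoRA params require grad
out = model(**enc)
loss = contrastive_mask_loss(out.logits, enc["input_ids"], privacy_mask)
loss.backward()
optimizer.step()
```

Because only the LoRA matrices receive gradients, the base Pythia weights stay frozen, which is what keeps the unlearning update low-rank and cheap to revert or merge.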
Similar Papers
Unintended Memorization of Sensitive Information in Fine-Tuned Language Models
Machine Learning (CS)
Protects private info in AI's memory.
Shadow Unlearning: A Neuro-Semantic Approach to Fidelity-Preserving Faceless Forgetting in LLMs
Cryptography and Security
Removes private data from AI without seeing it.
Privacy Preservation through Practical Machine Unlearning
Machine Learning (CS)
Removes private info from AI without retraining.