Recover-to-Forget: Gradient Reconstruction from LoRA for Efficient LLM Unlearning
By: Yezi Liu, Hanning Chen, Wenjun Huang, and more
Potential Business Impact:
Removes unwanted information from AI models without retraining them.
Unlearning in large foundation models (e.g., LLMs) is essential for enabling dynamic knowledge updates, enforcing data deletion rights, and correcting model behavior. However, existing unlearning methods often require full-model fine-tuning or access to the original training data, which limits their scalability and practicality. In this work, we introduce Recover-to-Forget (R2F), a novel framework for efficient unlearning in LLMs based on reconstructing full-model gradient directions from low-rank LoRA adapter updates. Rather than performing backpropagation through the full model, we compute gradients with respect to LoRA parameters using multiple paraphrased prompts and train a gradient decoder to approximate the corresponding full-model gradients. To ensure applicability to larger or black-box models, the decoder is trained on a proxy model and transferred to target models. We provide a theoretical analysis of cross-model generalization and demonstrate that our method achieves effective unlearning while preserving general model performance. Experimental results demonstrate that R2F offers a scalable and lightweight alternative for unlearning in pretrained LLMs without requiring full retraining or access to internal parameters.
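The abstract's core mechanism can be sketched numerically: gradients with respect to the low-rank LoRA factors are cheap to obtain, and a decoder is trained (here on synthetic "proxy model" gradients) to map them back to an approximate full-weight gradient, which can then drive the unlearning update. This is a minimal illustration, not the authors' implementation; the shapes, the linear least-squares decoder, and all variable names are assumptions.

```python
# Hypothetical sketch of the R2F idea: train a decoder that maps
# flattened LoRA-adapter gradients to approximate full-weight gradients.
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 2                      # full weight dimension, LoRA rank (assumed)
A = rng.normal(size=(r, d))       # LoRA down-projection factor
B = rng.normal(size=(d, r))       # LoRA up-projection factor

def lora_grads(g_full):
    """Gradients w.r.t. the LoRA factors for W + B @ A, given the
    full-weight gradient g_full: dL/dB = g_full @ A.T, dL/dA = B.T @ g_full."""
    return g_full @ A.T, B.T @ g_full

# "Train" the gradient decoder on a proxy model: regress full gradients
# from concatenated, flattened LoRA gradients (least squares stands in
# for the learned decoder network described in the abstract).
X, Y = [], []
for _ in range(200):
    g = rng.normal(size=(d, d))   # synthetic proxy-model gradient
    gB, gA = lora_grads(g)
    X.append(np.concatenate([gB.ravel(), gA.ravel()]))
    Y.append(g.ravel())
X, Y = np.stack(X), np.stack(Y)
W_dec, *_ = np.linalg.lstsq(X, Y, rcond=None)

# At unlearning time: compute cheap LoRA gradients on the target model,
# decode an approximate full-model gradient, and step against it
# (e.g., gradient ascent on the forget-set loss).
g_true = rng.normal(size=(d, d))
gB, gA = lora_grads(g_true)
g_hat = (np.concatenate([gB.ravel(), gA.ravel()]) @ W_dec).reshape(d, d)
```

Because the LoRA gradients live in a rank-limited subspace, `g_hat` is only an approximation of `g_true`; the appeal, per the abstract, is that no backpropagation through the full model is required.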
Similar Papers
LUNE: Efficient LLM Unlearning via LoRA Fine-Tuning with Negative Examples
Machine Learning (CS)
Lets computers forget unwanted information easily.
UnGuide: Learning to Forget with LoRA-Guided Diffusion Models
CV and Pattern Recognition
Removes bad ideas from AI art generators.
RapidUn: Influence-Driven Parameter Reweighting for Efficient Large Language Model Unlearning
Computation and Language
Teaches AI to forget bad information quickly.