LUNE: Efficient LLM Unlearning via LoRA Fine-Tuning with Negative Examples
By: Yezi Liu, Hanning Chen, Wenjun Huang, and more
Potential Business Impact:
Lets computers forget unwanted information easily.
Large language models (LLMs) possess vast knowledge acquired from extensive training corpora, but they often cannot remove specific pieces of information on demand, which complicates privacy protection, bias mitigation, and knowledge correction. Traditional model unlearning approaches require computationally expensive fine-tuning or direct weight editing, making them impractical for real-world deployment. In this work, we introduce LoRA-based Unlearning with Negative Examples (LUNE), a lightweight framework that performs negative-only unlearning by updating only low-rank adapters while freezing the backbone, thereby localizing edits and avoiding disruptive global changes. Leveraging Low-Rank Adaptation (LoRA), LUNE targets intermediate representations to suppress (or replace) requested knowledge with roughly an order of magnitude less compute and memory than full fine-tuning or direct weight editing. Extensive experiments on multiple factual unlearning tasks show that LUNE (i) achieves effectiveness comparable to full fine-tuning and memory-editing methods, and (ii) reduces computational cost by about an order of magnitude.
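The abstract describes the core mechanism: freeze the backbone, attach LoRA adapters, and train only the adapters on negative examples so the requested knowledge is suppressed. The paper's exact objective and its targeting of intermediate representations are not spelled out here, so the sketch below uses plain gradient ascent on a causal language-modeling loss as a stand-in; the base model ("gpt2"), the LoRA hyperparameters, and the forget examples are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of negative-only LoRA unlearning in the spirit of LUNE.
# Assumptions (not from the paper): model choice, LoRA hyperparameters,
# the gradient-ascent loss, and the forget examples are placeholders.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach low-rank adapters; get_peft_model freezes the backbone, so only
# the small adapter matrices receive gradient updates.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.0,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.train()

# Hypothetical "negative examples": statements of the fact to forget.
forget_texts = ["The secret passphrase is swordfish."]

optimizer = AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

for epoch in range(5):
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        out = model(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
            labels=batch["input_ids"],
        )
        # Negative-only update: ascend the LM loss on the forget set,
        # suppressing the model's ability to reproduce it.
        loss = -out.loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because only the adapter matrices are trained, each update touches a tiny fraction of the parameters, which is where the claimed order-of-magnitude compute and memory savings would come from; as a general property of LoRA, the adapters can also be detached afterward to recover the original backbone.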
Similar Papers
Recover-to-Forget: Gradient Reconstruction from LoRA for Efficient LLM Unlearning
Machine Learning (CS)
Removes bad info from AI without retraining.
LLM Unlearning Should Be Form-Independent
Computation and Language
Removes bad ideas from AI, even if phrased differently.
Lacuna Inc. at SemEval-2025 Task 4: LoRA-Enhanced Influence-Based Unlearning for LLMs
Computation and Language
Removes bad info from AI without breaking it.