LLM Unlearning using Gradient Ratio-Based Influence Estimation and Noise Injection
By: Ameya Anjarlekar, Sandeep Pombra
Potential Business Impact:
Removes specific data from AI without breaking it.
The growing legal and ethical scrutiny of large language models (LLMs) necessitates effective machine unlearning, particularly for sensitive or unauthorized data. Existing empirical methods often yield incomplete forgetting or unintended degradation of unrelated knowledge because they poorly localize the parameters responsible for the data to be forgotten. In this work, we propose GRIN: a modular and targeted framework for LLM unlearning. GRIN introduces a novel gradient-ratio-based metric to identify the parameters most responsible for memorizing the forget data. We then perform selective noise injection into these parameters prior to fine-tuning, which improves unlearning performance while maintaining model utility. Finally, we propose new evaluation metrics tailored to the LLM setting and validate our approach on standard benchmarks such as TOFU, WMDP, and SafePKU.
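The abstract does not specify the exact form of the gradient-ratio metric, the selection threshold, or the noise schedule. The PyTorch sketch below illustrates one plausible reading: each parameter is scored by the ratio of its forget-set to retain-set gradient magnitude, and the top-scoring fraction is perturbed with Gaussian noise before the subsequent fine-tuning step. The function names and hyperparameters (`top_fraction`, `noise_std`, `eps`) are illustrative assumptions, not the paper's implementation.

```python
import torch

def gradient_ratio_scores(model, forget_loss, retain_loss, eps=1e-8):
    """Score each parameter by |grad on forget set| / |grad on retain set|.

    A high ratio suggests the parameter contributes more to memorizing the
    forget data than to retained knowledge (assumed form of the metric).
    """
    params = list(model.parameters())
    forget_grads = torch.autograd.grad(forget_loss, params, retain_graph=True)
    retain_grads = torch.autograd.grad(retain_loss, params)
    scores = {}
    for (name, _), g_f, g_r in zip(model.named_parameters(), forget_grads, retain_grads):
        scores[name] = g_f.abs() / (g_r.abs() + eps)
    return scores

def inject_noise(model, scores, top_fraction=0.01, noise_std=0.02):
    """Add Gaussian noise only to the highest-ratio parameters (assumed scheme)."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(top_fraction * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    with torch.no_grad():
        for name, param in model.named_parameters():
            mask = scores[name] >= threshold          # select memorizing parameters
            param.add_(noise_std * torch.randn_like(param) * mask)
    # The perturbed model would then be fine-tuned (e.g., on the retain set)
    # to recover utility while the forget data remains unlearned.
```

In practice, `forget_loss` and `retain_loss` would be computed from forward passes over the forget and retain splits of a benchmark such as TOFU, and the fine-tuning that follows noise injection is where the framework's utility preservation is expected to come from.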
Similar Papers
GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs
Machine Learning (CS)
Cleans AI brains without breaking other skills.
GRAIL: Gradient-Based Adaptive Unlearning for Privacy and Copyright in LLMs
Computation and Language
Removes private info from AI without breaking it.
LLM Unlearning on Noisy Forget Sets: A Study of Incomplete, Rewritten, and Watermarked Data
Machine Learning (CS)
Cleans AI without needing perfect instructions.