A Survey on Unlearning in Large Language Models
By: Ruichen Qiu, Jiajun Tan, Jiayue Pu, and more
Potential Business Impact:
Lets AI models forget private or harmful information on request.
The advancement of Large Language Models (LLMs) has revolutionized natural language processing, yet their training on massive corpora poses significant risks, including the memorization of sensitive personal data, copyrighted material, and knowledge that could facilitate malicious activities. To mitigate these issues and align with legal and ethical standards such as the "right to be forgotten", machine unlearning has emerged as a critical technique for selectively erasing specific knowledge from LLMs without compromising their overall performance. This survey provides a systematic review of over 180 papers on LLM unlearning published since 2021, focusing exclusively on large-scale generative models. Distinct from prior surveys, we introduce novel taxonomies for both unlearning methods and evaluations. We categorize methods into training-time, post-training, and inference-time approaches, based on the stage of the model lifecycle at which unlearning is applied. For evaluations, we not only systematically compile existing datasets and metrics but also critically analyze their advantages, disadvantages, and applicability, providing practical guidance to the research community. In addition, we discuss key challenges and promising future research directions. Our comprehensive overview aims to inform and guide the ongoing development of secure and reliable LLMs.
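As context for the taxonomy above, the sketch below illustrates one common post-training unlearning technique: gradient ascent on a designated forget set, which updates the model to raise (rather than lower) its loss on the content to be erased. This is a minimal illustration, not the survey's prescribed method; the model name, forget_texts, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch of post-training unlearning via gradient ascent on a
# "forget set". Assumes PyTorch + Hugging Face Transformers; the model
# name, forget_texts, and learning rate are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Placeholder forget set: text the model should no longer reproduce.
forget_texts = ["Example sentence containing data to be forgotten."]
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    # Standard next-token prediction loss on the forget example...
    outputs = model(**batch, labels=batch["input_ids"])
    # ...negated, so the update increases the loss (gradient ascent),
    # pushing the model away from generating this content.
    loss = -outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, pure gradient ascent tends to degrade general capability, so methods in this family typically add a retain-set or KL-divergence term to preserve utility; this is the trade-off the abstract points to when it requires unlearning "without compromising overall performance".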
Similar Papers
A Comprehensive Survey of Machine Unlearning Techniques for Large Language Models
Computation and Language
Removes unwanted information from AI models without retraining.
Unlearning Imperative: Securing Trustworthy and Responsible LLMs through Engineered Forgetting
Machine Learning (CS)
Lets AI forget private information when asked.
Not All Data Are Unlearned Equally
Computation and Language
Removes unwanted information from AI models.