SoK: Machine Unlearning for Large Language Models
By: Jie Ren, Yue Xing, Yingqian Cui, and more
Potential Business Impact:
Removes unwanted information from AI models without retraining them.
Large language model (LLM) unlearning has become a critical topic in machine learning, aiming to eliminate the influence of specific training data or knowledge without retraining the model from scratch. A variety of techniques have been proposed, including Gradient Ascent, model editing, and re-steering hidden representations. While existing surveys often organize these methods by their technical characteristics, such classifications tend to overlook a more fundamental dimension: the underlying intention of unlearning--whether it seeks to truly remove internal knowledge or merely suppress its behavioral effects. In this SoK paper, we propose a new taxonomy based on this intention-oriented perspective. Building on this taxonomy, we make three key contributions. First, we revisit recent findings suggesting that many removal methods may functionally behave like suppression, and explore whether true removal is necessary or achievable. Second, we survey existing evaluation strategies, identify limitations in current metrics and benchmarks, and suggest directions for developing more reliable and intention-aligned evaluations. Third, we highlight practical challenges--such as scalability and support for sequential unlearning--that currently hinder the broader deployment of unlearning methods. In summary, this work offers a comprehensive framework for understanding and advancing unlearning in generative AI, aiming to support future research and guide policy decisions around data removal and privacy.
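To make the first technique the abstract names concrete, here is a minimal sketch of Gradient Ascent unlearning, assuming a Hugging Face causal LM; the model choice and the example forget-set string are hypothetical, and this illustrates the general idea of ascending on the forget-set loss, not any specific method surveyed in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def unlearn_step(batch_texts):
    """One Gradient Ascent step on a batch of forget-set texts."""
    inputs = tokenizer(batch_texts, return_tensors="pt",
                       padding=True, truncation=True)
    labels = inputs["input_ids"].clone()
    labels[inputs["attention_mask"] == 0] = -100  # ignore padding in the loss
    outputs = model(**inputs, labels=labels)
    # Gradient Ascent: maximize the language-modeling loss on the forget
    # set by descending on its negation, pushing the model away from the
    # sequences it is supposed to forget.
    (-outputs.loss).backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()

# Hypothetical usage on a string the model should forget:
# forget_loss = unlearn_step(["Alice's phone number is 555-0199."])
```

A few such steps degrade the model's ability to reproduce the forget set, but, as the paper's removal-versus-suppression distinction highlights, a rising forget loss does not by itself show that the underlying knowledge has been removed rather than merely suppressed.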
Similar Papers
A Comprehensive Survey of Machine Unlearning Techniques for Large Language Models
Computation and Language
Cleans unwanted info from AI without retraining.
A Survey on Unlearning in Large Language Models
Computation and Language
Lets AI forget private or bad information.
Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy
Cryptography and Security
Makes AI forget bad things without breaking good things.