SoK: Machine Unlearning for Large Language Models

Published: June 10, 2025 | arXiv ID: 2506.09227v1

By: Jie Ren, Yue Xing, Yingqian Cui, and more

BigTech Affiliations: IBM

Potential Business Impact:

Enables removal of specific training data or knowledge from language models without retraining from scratch, supporting privacy and data-deletion requirements.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language model (LLM) unlearning has become a critical topic in machine learning, aiming to eliminate the influence of specific training data or knowledge without retraining the model from scratch. A variety of techniques have been proposed, including Gradient Ascent, model editing, and re-steering hidden representations. While existing surveys often organize these methods by their technical characteristics, such classifications tend to overlook a more fundamental dimension: the underlying intention of unlearning--whether it seeks to truly remove internal knowledge or merely suppress its behavioral effects. In this SoK paper, we propose a new taxonomy based on this intention-oriented perspective. Building on this taxonomy, we make three key contributions. First, we revisit recent findings suggesting that many removal methods may functionally behave like suppression, and explore whether true removal is necessary or achievable. Second, we survey existing evaluation strategies, identify limitations in current metrics and benchmarks, and suggest directions for developing more reliable and intention-aligned evaluations. Third, we highlight practical challenges--such as scalability and support for sequential unlearning--that currently hinder the broader deployment of unlearning methods. In summary, this work offers a comprehensive framework for understanding and advancing unlearning in generative AI, aiming to support future research and guide policy decisions around data removal and privacy.
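The abstract names gradient ascent as one proposed unlearning technique. As a minimal sketch (not the paper's method, and on a toy linear model rather than an LLM), the idea is to take gradient steps that *increase* the loss on the data to be forgotten, reversing the usual training update:

```python
import numpy as np

def loss_and_grad(w, X, y):
    # Squared-error loss for a linear model and its gradient w.r.t. w.
    err = X @ w - y
    loss = 0.5 * np.mean(err ** 2)
    grad = X.T @ err / len(y)
    return loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

# "Train": ordinary gradient descent on the full dataset.
w = np.zeros(4)
for _ in range(200):
    _, g = loss_and_grad(w, X, y)
    w -= 0.1 * g

forget_X, forget_y = X[:4], y[:4]  # the examples to unlearn
loss_before, _ = loss_and_grad(w, forget_X, forget_y)

# Gradient-ascent unlearning: step *up* the loss on the forget set only.
for _ in range(20):
    _, g = loss_and_grad(w, forget_X, forget_y)
    w += 0.05 * g  # note the sign flip: ascent, not descent

loss_after, _ = loss_and_grad(w, forget_X, forget_y)
# loss_after now exceeds loss_before: the model fits the forget set worse.
```

Whether such an update truly removes the underlying knowledge or merely suppresses its behavioral expression is exactly the distinction the paper's intention-oriented taxonomy examines.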

Country of Origin
🇺🇸 United States

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)