
Investigating Model Editing for Unlearning in Large Language Models

Published: December 23, 2025 | arXiv ID: 2512.20794v1

By: Shariqah Hossain, Lalana Kagal

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Removes unwanted information from AI models without degrading their remaining knowledge.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Machine unlearning aims to remove unwanted information from a model, but many existing methods are inefficient for LLMs with large numbers of parameters, or fail to fully remove the targeted information without degrading performance on knowledge that should be retained. Model editing algorithms solve a related problem of changing information in models, but they focus on redirecting inputs to a new target rather than removing the information altogether. In this work, we explore the editing algorithms ROME, IKE, and WISE and design new editing targets suited to an unlearning setting. Through this investigation, we show that model editing approaches can exceed baseline unlearning methods in quality of forgetting, depending on the setting. However, like traditional unlearning techniques, they struggle to capture the full scope of what should be unlearned without damaging overall model performance.
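As a rough illustration of how an editing-style update can serve unlearning, the sketch below applies a ROME-style rank-one weight update that redirects a fact's key vector to a "forget" target instead of a new fact. This is a minimal sketch under stated assumptions, not the paper's actual method: the toy layer sizes, the randomly generated key statistics, and the zero vector standing in for an "unlearned" answer are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 32

# Toy linear "memory" layer: maps key vectors (subject encodings)
# to value vectors (fact encodings), in the spirit of ROME's view
# of transformer MLP weights as key-value stores.
W = rng.standard_normal((d_out, d_in))

# Second moment of keys over generic inputs, used to keep the edit
# from disturbing unrelated directions (illustrative random data).
K = rng.standard_normal((d_in, 1000))
C = K @ K.T / K.shape[1]
C_inv = np.linalg.inv(C)

k = rng.standard_normal(d_in)   # key for the fact to be unlearned
v_star = np.zeros(d_out)        # hypothetical "forget" target value

# Rank-one update: after the edit, W_new @ k equals v_star exactly,
# while keys uncorrelated with k are changed as little as possible.
residual = v_star - W @ k
u = C_inv @ k
W_new = W + np.outer(residual, u) / (k @ u)

assert np.allclose(W_new @ k, v_star)
```

The design choice the sketch highlights is the one the paper investigates: standard editing redirects `k` to a new fact's value, whereas an unlearning target (here, an arbitrary zero vector) must suppress the old answer without breaking nearby knowledge.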

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science: Computation and Language