Investigating Model Editing for Unlearning in Large Language Models
By: Shariqah Hossain, Lalana Kagal
Potential Business Impact:
Removes bad info from AI without breaking it.
Machine unlearning aims to remove unwanted information from a model, but many methods are inefficient for LLMs with large numbers of parameters, or fail to fully remove the intended information without degrading performance on knowledge that should be retained. Model editing algorithms solve a similar problem of changing information in models, but they focus on redirecting inputs to a new target rather than removing that information altogether. In this work, we explore the editing algorithms ROME, IKE, and WISE and design new editing targets for an unlearning setting. Through this investigation, we show that model editing approaches can exceed baseline unlearning methods in forgetting quality, depending on the setting. However, like traditional unlearning techniques, they struggle to cover the full scope of what is to be unlearned without damaging overall model performance.
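The core idea of repurposing editing for unlearning can be sketched as follows. A standard editing request (ROME-style) maps a prompt to a new target fact; for unlearning, the target is instead chosen to be non-informative. This is a minimal illustrative sketch, not the paper's implementation: the field names and the refusal-string target are assumptions.

```python
# Hypothetical sketch of an editing request repurposed for unlearning.
# Field names ("prompt", "subject", "target_new") are illustrative, loosely
# following common model-editing toolkits; they are not taken from the paper.

def make_edit_request(prompt: str, subject: str, target: str) -> dict:
    """Build a minimal ROME-style edit request."""
    return {"prompt": prompt, "subject": subject, "target_new": target}

# Standard editing: redirect the fact to a NEW answer.
edit = make_edit_request(
    "The Eiffel Tower is located in", "Eiffel Tower", "Rome"
)

# Unlearning variant: redirect to a non-informative target so the model
# no longer produces the original fact. The refusal string is one assumed
# choice of unlearning target.
REFUSAL_TARGET = "I don't know."
unlearn = make_edit_request(
    "The Eiffel Tower is located in", "Eiffel Tower", REFUSAL_TARGET
)

print(unlearn["target_new"])  # → I don't know.
```

The difference between the two requests is only the target, which is what makes existing editing algorithms applicable: the same update machinery is run, but toward a target designed to remove rather than replace the information.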
Similar Papers
A Survey on Unlearning in Large Language Models
Computation and Language
Lets AI forget private or bad information.
UIPE: Enhancing LLM Unlearning by Removing Knowledge Related to Forgetting Targets
Computation and Language
Cleans harmful knowledge from AI without breaking it.
Unlearning Imperative: Securing Trustworthy and Responsible LLMs through Engineered Forgetting
Machine Learning (CS)
Lets AI forget private information when asked.