Score: 2

BLUR: A Bi-Level Optimization Approach for LLM Unlearning

Published: June 9, 2025 | arXiv ID: 2506.08164v1

By: Hadi Reisizadeh, Jinghan Jia, Zhiqi Bu, and more

Potential Business Impact:

Lets organizations remove specific knowledge or capabilities from trained AI models without retraining from scratch, supporting compliance with data regulations and ethical-use requirements.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Enabling large language models (LLMs) to unlearn knowledge and capabilities acquired during training has proven vital for ensuring compliance with data regulations and promoting ethical practices in generative AI. Although there is growing interest in developing various unlearning algorithms, it remains unclear how best to formulate the unlearning problem. The most popular formulation uses a weighted sum of the forget and retain losses, but it often leads to performance degradation due to the inherent trade-off between the two. In this work, we argue that it is important to model the hierarchical structure of the unlearning problem, where the forget problem (which unlearns certain knowledge and/or capabilities) takes priority over the retain problem (which preserves model utility). This hierarchical structure naturally leads to a bi-level optimization formulation where the lower-level objective focuses on minimizing the forget loss, while the upper-level objective aims to maintain the model's utility. Based on this new formulation, we propose a novel algorithm, termed Bi-Level UnleaRning (BLUR), which not only possesses strong theoretical guarantees but, more importantly, delivers superior performance. In particular, our extensive experiments demonstrate that BLUR consistently outperforms state-of-the-art algorithms across various unlearning tasks, models, and metrics. Code is available at https://github.com/OptimAI-Lab/BLURLLMUnlearning.
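To make the hierarchy concrete, the formulation described in the abstract can be written as a bi-level program (the notation below is ours, not taken from the paper; \( \ell_f \) and \( \ell_r \) denote the forget and retain losses over model parameters \( \theta \)):

\[
\min_{\theta \in S^\star} \; \ell_r(\theta)
\qquad \text{s.t.} \qquad
S^\star = \operatorname*{arg\,min}_{\theta'} \; \ell_f(\theta')
\]

The upper level searches for a utility-preserving model only among minimizers of the forget loss, rather than trading the two losses off in a single fixed weighted sum. As a minimal sketch of how such a hierarchy might be approximated in practice (an illustrative penalty-method heuristic with an increasing penalty schedule, not the authors' BLUR algorithm; `loss_fn` is an assumed user-supplied callable), one training step could look like:

```python
import torch

def penalized_unlearn_step(model: torch.nn.Module,
                           forget_batch, retain_batch,
                           loss_fn,
                           optimizer: torch.optim.Optimizer,
                           penalty: float):
    """One gradient step on retain_loss + penalty * forget_loss.

    Gradually increasing `penalty` across steps pushes the iterates
    toward lower-level (forget) optimality, approximating the
    bi-level hierarchy instead of a fixed weighted-sum trade-off.
    """
    optimizer.zero_grad()
    forget_loss = loss_fn(model, forget_batch)   # lower-level objective
    retain_loss = loss_fn(model, retain_batch)   # upper-level objective
    total = retain_loss + penalty * forget_loss  # penalized upper level
    total.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```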

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links
https://github.com/OptimAI-Lab/BLURLLMUnlearning
Page Count
19 pages

Category
Computer Science: Machine Learning (CS)