BLUR: A Benchmark for LLM Unlearning Robust to Forget-Retain Overlap
By: Shengyuan Hu, Neil Kale, Pratiksha Thaker, and more
Potential Business Impact:
Makes AI forget bad things without messing up good things.
Machine unlearning has the potential to improve the safety of large language models (LLMs) by removing sensitive or harmful information post hoc. A key challenge in unlearning is balancing forget quality (effectively unlearning undesirable information) against retain quality (maintaining good performance on other, general tasks). Unfortunately, as we show, current LLM unlearning benchmarks contain highly disparate forget and retain sets -- painting a false picture of the effectiveness of LLM unlearning methods. This is particularly problematic because it opens the door for benign perturbations, such as relearning attacks, to easily reveal supposedly unlearned knowledge once models are deployed. To address this, we present BLUR: a benchmark for LLM unlearning that provides more realistic scenarios of forget-retain overlap. BLUR significantly expands on existing unlearning benchmarks by providing extended evaluation tasks, combined forget/retain queries, and relearning datasets of varying difficulty. Despite the benign nature of the queries considered, we find that the performance of existing methods drops significantly when evaluated on BLUR, with simple approaches performing better on average than more recent methods. These results highlight the importance of robust evaluation and suggest several important directions for future study. Our benchmark is publicly available at: https://huggingface.co/datasets/forgelab/BLUR
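Since the benchmark is hosted as a Hugging Face dataset, a minimal sketch of loading it might look like the following. The split and field names here are assumptions for illustration; consult the dataset card at the URL above for the actual configurations (extended evaluation tasks, combined forget/retain queries, and relearning sets of varying difficulty).

    from datasets import load_dataset

    # Load BLUR from the Hugging Face Hub. Configuration and split names
    # are assumptions; see the dataset card for the real layout.
    blur = load_dataset("forgelab/BLUR")

    # Inspect a few examples from the first available split;
    # field names depend on the evaluation task.
    split = next(iter(blur.values()))
    for example in split.select(range(3)):
        print(example)

For readers unfamiliar with relearning attacks, the sketch below illustrates the general idea, not the paper's exact protocol: a handful of fine-tuning steps on benign text that overlaps the forget set can resurface knowledge an unlearned model supposedly forgot. The checkpoint path and relearning texts are hypothetical placeholders.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "path/to/unlearned-model"  # hypothetical unlearned checkpoint
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)
    model.train()

    # Benign texts that overlap the forgotten topic (placeholder content).
    relearn_texts = ["A short, harmless passage related to the forgotten topic."]
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    for step in range(10):  # a few gradient steps often suffice
        for text in relearn_texts:
            batch = tokenizer(text, return_tensors="pt")
            loss = model(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

    # Re-query the model on forget-set prompts and compare forget quality
    # before and after to measure how much knowledge has resurfaced.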
Similar Papers
BLUR: A Bi-Level Optimization Approach for LLM Unlearning
Machine Learning (CS)
Teaches AI to forget bad or wrong information.
iShumei-Chinchunmei at SemEval-2025 Task 4: A balanced forgetting and retention multi-task framework using effective unlearning loss
Computation and Language
Teaches computers to forget bad information.
OBLIVIATE: Robust and Practical Machine Unlearning for Large Language Models
Computation and Language
Cleans AI models of bad or private info.