A Neuro-inspired Interpretation of Unlearning in Large Language Models through Sample-level Unlearning Difficulty
By: Xiaohua Feng, Yuyuan Li, Chengye Wang, and more
Potential Business Impact:
Helps computers forget specific information faster.
Driven by privacy protection laws and regulations, unlearning in Large Language Models (LLMs) is gaining increasing attention. However, current research often neglects the interpretability of the unlearning process, particularly concerning sample-level unlearning difficulty. Existing studies typically assume a uniform unlearning difficulty across samples. This simplification risks attributing the performance of unlearning algorithms to sample selection rather than the algorithm's design, potentially steering the development of LLM unlearning in the wrong direction. Thus, we investigate the relationship between LLM unlearning and sample characteristics, with a focus on unlearning difficulty. Drawing inspiration from neuroscience, we propose a Memory Removal Difficulty ($\mathrm{MRD}$) metric to quantify sample-level unlearning difficulty. Using $\mathrm{MRD}$, we analyze the characteristics of hard-to-unlearn versus easy-to-unlearn samples. Furthermore, we propose an $\mathrm{MRD}$-based weighted sampling method to optimize existing unlearning algorithms, which prioritizes easily forgettable samples, thereby improving unlearning efficiency and effectiveness. We validate the proposed metric and method using public benchmarks and datasets, with results confirming their effectiveness.
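The MRD-based weighted sampling idea lends itself to a minimal sketch: given per-sample MRD scores for the forget set, lower-MRD (easier-to-unlearn) samples receive higher sampling probability during the unlearning loop. The helper name `mrd_weighted_loader`, the softmax-over-negative-MRD weighting, and the temperature `tau` below are illustrative assumptions, not the paper's exact formulation, which is not specified in this abstract.

```python
# Minimal sketch of MRD-based weighted sampling (illustrative assumptions only).
# `mrd_scores` are assumed to be precomputed per-sample Memory Removal Difficulty
# values for the forget set; the inverse-MRD softmax weighting and `tau` are
# hypothetical choices standing in for the paper's actual scheme.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler


def mrd_weighted_loader(forget_set, mrd_scores, batch_size=8, tau=1.0):
    """Build a DataLoader that samples easy-to-unlearn (low-MRD) examples more often."""
    scores = torch.as_tensor(mrd_scores, dtype=torch.float32)
    # Lower MRD -> easier to unlearn -> larger sampling weight.
    weights = torch.softmax(-scores / tau, dim=0)
    sampler = WeightedRandomSampler(weights, num_samples=len(forget_set), replacement=True)
    return DataLoader(forget_set, batch_size=batch_size, sampler=sampler)


if __name__ == "__main__":
    # Toy usage: random token ids stand in for forget-set samples.
    dummy_inputs = torch.randint(0, 1000, (32, 16))  # 32 samples, 16 tokens each
    dummy_mrd = torch.rand(32)                        # placeholder MRD scores
    loader = mrd_weighted_loader(TensorDataset(dummy_inputs), dummy_mrd)
    for (batch,) in loader:
        pass  # feed `batch` into an existing unlearning objective (e.g., gradient ascent)
```

The sampler only reweights which forget-set samples are visited each step; it leaves the underlying unlearning objective unchanged, which is why it can be layered onto existing algorithms.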
Similar Papers
A Survey on Unlearning in Large Language Models
Computation and Language
Lets AI forget private or bad information.
LLM Unlearning on Noisy Forget Sets: A Study of Incomplete, Rewritten, and Watermarked Data
Machine Learning (CS)
Cleans AI without needing perfect instructions.
Leak@$k$: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding
Machine Learning (CS)
Makes AI forget private information reliably.