Unlearning vs. Obfuscation: Are We Truly Removing Knowledge?
By: Guangzhi Sun, Potsawee Manakul, Xiao Zhan, and more
Potential Business Impact:
Removes unwanted info from AI, making it safer.
Unlearning has emerged as a critical capability for large language models (LLMs) to support data privacy, regulatory compliance, and ethical AI deployment. Recent techniques often rely on obfuscation, injecting incorrect or irrelevant information to suppress knowledge. Such methods effectively constitute knowledge addition rather than true removal, and they often leave models vulnerable to probing. In this paper, we formally distinguish unlearning from obfuscation and introduce a probing-based evaluation framework to assess whether existing approaches genuinely remove targeted information. Moreover, we propose DF-MCQ, a novel unlearning method that uses a KL-divergence objective to flatten the model's predictive distribution over automatically generated multiple-choice questions, effectively removing knowledge about target individuals and triggering appropriate refusal behaviour. Experimental results demonstrate that DF-MCQ achieves unlearning with an over 90% refusal rate and, on probing questions, an uncertainty close to random choice, far higher than that left by obfuscation.
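The following is a minimal sketch of the distribution-flattening idea behind DF-MCQ, assuming a Hugging Face causal LM, that each option is scored by the first-token logit of its letter, and that the loss is the KL divergence between a uniform target and the model's distribution over the options. The paper's exact prompt format, KL direction, and training loop are not reproduced here, so the model name, question text, and helper function are illustrative placeholders rather than the authors' implementation.

```python
# Hedged sketch of a DF-MCQ-style flattening loss (assumptions noted above).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper targets larger instruction-tuned LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def flattening_loss(question: str, options: list[str]) -> torch.Tensor:
    """KL divergence between a uniform target and the model's distribution
    over the MCQ option letters for a single auto-generated question."""
    prompt = question + "\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    logits = model(**inputs).logits[0, -1]  # next-token logits after the prompt

    # Score each option by the logit of its first token, e.g. " A", " B", ...
    option_ids = [tokenizer.encode(" " + o, add_special_tokens=False)[0] for o in options]
    log_p = F.log_softmax(logits[option_ids], dim=-1)

    # Target: uniform distribution over the options (random-choice uncertainty).
    uniform = torch.full_like(log_p, 1.0 / len(options))
    return F.kl_div(log_p, uniform, reduction="sum")

# One illustrative gradient step toward a flat answer distribution.
loss = flattening_loss(
    "Where was the target individual born?\nA. Paris\nB. London\nC. Rome\nD. Berlin",
    ["A", "B", "C", "D"],
)
loss.backward()
```

Minimising this loss over many automatically generated multiple-choice questions pushes the model toward random-choice uncertainty about the target individual, which is the behaviour the probing-based evaluation in the paper is designed to measure.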
Similar Papers
SoK: Machine Unlearning for Large Language Models
Machine Learning (CS)
Removes unwanted information from AI models.
A Survey on Unlearning in Large Language Models
Computation and Language
Lets AI forget private or bad information.
Leak@k: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding
Machine Learning (CS)
Shows unlearned AI can still leak private information.