Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness
By: Rongzhe Wei, Peizhi Niu, Hans Hao-Hsun Hsu, and more
Potential Business Impact:
Tests whether AI has truly forgotten information, including related knowledge, not just isolated facts.
Machine unlearning techniques aim to mitigate unintended memorization in large language models (LLMs). However, existing approaches predominantly focus on the explicit removal of isolated facts, often overlooking latent inferential dependencies and the non-deterministic nature of knowledge within LLMs. Consequently, facts presumed forgotten may persist implicitly through correlated information. To address these challenges, we propose a knowledge unlearning evaluation framework that more accurately captures the implicit structure of real-world knowledge by representing relevant factual contexts as knowledge graphs with associated confidence scores. We further develop an inference-based evaluation protocol leveraging powerful LLMs as judges; these judges reason over the extracted knowledge subgraph to determine unlearning success. Our LLM judges utilize carefully designed prompts and are calibrated against human evaluations to ensure their trustworthiness and stability. Extensive experiments on our newly constructed benchmark demonstrate that our framework provides a more realistic and rigorous assessment of unlearning performance. Moreover, our findings reveal that current evaluation strategies tend to overestimate unlearning effectiveness. Our code is publicly available at https://github.com/Graph-COM/Knowledge_Unlearning.git.
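To make the idea concrete, below is a minimal, hypothetical sketch of how such an evaluation could be wired up in Python: correlated facts are stored as a confidence-weighted knowledge graph, the subgraph around the unlearning target is extracted, and that subgraph is formatted into a prompt for an LLM judge to decide whether the target fact is still recoverable. The function names, the use of `networkx`, and the toy triples are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Hypothetical sketch of the evaluation idea described in the abstract:
# represent facts correlated with the unlearning target as a knowledge graph
# with per-edge confidence scores, extract the relevant subgraph, and build
# a prompt asking an LLM judge whether the target fact can still be inferred.
import networkx as nx


def build_knowledge_graph(triples):
    """Build a directed graph from (head, relation, tail, confidence) triples."""
    g = nx.DiGraph()
    for head, relation, tail, conf in triples:
        g.add_edge(head, tail, relation=relation, confidence=conf)
    return g


def correlated_subgraph(graph, target_entities, hops=2):
    """Return the induced subgraph of nodes within `hops` of any target entity."""
    keep = set()
    undirected = graph.to_undirected()
    for entity in target_entities:
        if entity in graph:
            keep |= set(nx.ego_graph(undirected, entity, radius=hops).nodes)
    return graph.subgraph(keep)


def build_judge_prompt(subgraph, target_fact):
    """Format the retained, confidence-weighted facts into a prompt for an LLM judge."""
    lines = [
        f"{u} --[{d['relation']} (confidence={d['confidence']:.2f})]--> {v}"
        for u, v, d in subgraph.edges(data=True)
    ]
    return (
        "The following facts were extracted from a model after unlearning:\n"
        + "\n".join(lines)
        + f"\n\nQuestion: can the fact '{target_fact}' still be inferred from the facts above? "
        "Answer 'forgotten' or 'recoverable' and explain briefly."
    )


if __name__ == "__main__":
    # Toy example: the target fact is nominally removed, but correlated facts remain,
    # so an inference-capable judge may still flag it as recoverable.
    triples = [
        ("Alice", "works_at", "Acme Corp", 0.9),
        ("Acme Corp", "located_in", "Berlin", 0.8),
        ("Alice", "commutes_to", "Berlin", 0.7),
    ]
    graph = build_knowledge_graph(triples)
    sub = correlated_subgraph(graph, target_entities=["Alice"])
    print(build_judge_prompt(sub, target_fact="Alice lives in Berlin"))
```

The prompt produced by this sketch would then be sent to a judge model (whose responses, per the abstract, are calibrated against human evaluations); only that judging step is omitted here since it depends on a specific LLM API.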
Similar Papers
Leak@k: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding
Machine Learning (CS)
Makes AI forget private information reliably.
A Survey on Unlearning in Large Language Models
Computation and Language
Lets AI forget private or bad information.
Unlearning That Lasts: Utility-Preserving, Robust, and Almost Irreversible Forgetting in LLMs
Machine Learning (CS)
Removes bad info from AI, making it safer.