Concept Unlearning in Large Language Models via Self-Constructed Knowledge Triplets
By: Tomoya Yamashita, Yuuki Yamanaka, Masanori Yamada, and more
Potential Business Impact:
Removes whole concepts from an AI model, not just specific sentences.
Machine Unlearning (MU) has recently attracted considerable attention as a solution to privacy and copyright issues in large language models (LLMs). Existing MU methods aim to remove specific target sentences from an LLM while minimizing damage to unrelated knowledge. However, these approaches require explicit target sentences and do not support removing broader concepts, such as persons or events. To address this limitation, we introduce Concept Unlearning (CU) as a new requirement for LLM unlearning. We leverage knowledge graphs to represent the LLM's internal knowledge and define CU as removing the nodes corresponding to the forgetting target, along with their associated edges. This graph-based formulation enables more intuitive unlearning and facilitates the design of more effective methods. We propose a novel method that prompts the LLM to generate knowledge triplets and explanatory sentences about the forgetting target and applies the unlearning process to these representations. Our approach enables more precise and comprehensive concept removal by aligning the unlearning process with the LLM's internal knowledge representations. Experiments on real-world and synthetic datasets demonstrate that our method effectively achieves concept-level unlearning while preserving unrelated knowledge.
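The abstract outlines a two-step pipeline: the LLM first verbalizes its own knowledge of the forgetting target as knowledge triplets and explanatory sentences, and an unlearning objective is then applied to that self-generated text. The sketch below illustrates one plausible instantiation using a Hugging Face causal LM; the model name ("gpt2"), the prompt wording, the gradient-ascent loss, and the elicit_knowledge/unlearn_step helpers are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of concept unlearning via self-constructed knowledge,
# assuming a Hugging Face causal LM. Hyperparameters and prompts are
# illustrative, not the paper's actual recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small stand-in; the paper targets larger LLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

def elicit_knowledge(concept: str) -> str:
    """Step 1: prompt the model itself to verbalize what it knows about the
    target concept as (subject, relation, object) triplets plus explanations."""
    prompt = (f"List knowledge triplets (subject, relation, object) about "
              f"{concept}, each followed by a one-sentence explanation:\n")
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=80, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
    # Keep only the newly generated continuation, not the prompt itself.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:])

def unlearn_step(text: str) -> float:
    """Step 2: gradient ASCENT on the self-generated text, pushing the model
    away from reproducing the concept's triplets and explanations."""
    model.train()
    inputs = tokenizer(text, return_tensors="pt")
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    optimizer.zero_grad()
    (-loss).backward()  # negated LM loss -> ascend, i.e. forget
    optimizer.step()
    return loss.item()

concept = "Alan Turing"  # hypothetical forgetting target
knowledge = elicit_knowledge(concept)
for _ in range(3):  # a few ascent steps, purely illustrative
    print("LM loss on target knowledge:", unlearn_step(knowledge))
```

Gradient ascent here merely stands in for whichever unlearning objective the paper actually applies; the essential idea the sketch captures is that the forget set is constructed by the model itself, so unlearning aligns with the model's own knowledge representations rather than with externally supplied target sentences.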
Similar Papers
CoUn: Empowering Machine Unlearning via Contrastive Learning
Machine Learning (CS)
Removes bad data from computer brains.
SoK: Machine Unlearning for Large Language Models
Machine Learning (CS)
Removes unwanted information from AI minds.
Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness
Computation and Language
Checks whether AI truly forgets related knowledge, not just isolated facts.