From Domains to Instances: Dual-Granularity Data Synthesis for LLM Unlearning
By: Xiaoyu Xu, Minxin Du, Zitong Li, and more
Potential Business Impact:
Teaches computers to forget specific information.
Although machine unlearning is essential for removing private, harmful, or copyrighted content from LLMs, current benchmarks often fail to faithfully represent the true "forgetting scope" learned by the model. We formalize two distinct unlearning granularities, domain-level and instance-level, and propose BiForget, an automated framework for synthesizing high-quality forget sets. Unlike prior work relying on external generators, BiForget exploits the target model itself to elicit data that matches its internal knowledge distribution through seed-guided and adversarial prompting. Our experiments across diverse benchmarks show that it achieves a superior balance of relevance, diversity, and efficiency. Quantitatively, in the Harry Potter domain, it improves relevance by ~20 and diversity by ~0.05 while halving the total data size compared to state-of-the-art methods. Ultimately, it facilitates more robust forgetting and better utility preservation, providing a more rigorous foundation for evaluating LLM unlearning.
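To make the seed-guided elicitation idea concrete, the sketch below prompts a target model to continue seed passages from the forget domain and keeps continuations that echo the seed's key terms, so the collected data reflects what the model itself memorized. This is a minimal sketch, assuming a Hugging Face causal LM as the target model; the model name, prompt template, filtering heuristic, and the function `elicit_forget_candidates` are illustrative stand-ins, not BiForget's actual pipeline, and the adversarial-prompting stage is not shown.

```python
# Minimal sketch of seed-guided forget-set elicitation (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small stand-in; the paper targets larger LLMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def elicit_forget_candidates(seeds, samples_per_seed=3, max_new_tokens=80):
    """Use the target model itself to surface text it associates with each seed."""
    candidates = []
    for seed in seeds:
        # Seed-guided prompt: a continuation task nudges the model to
        # reveal what it actually stores about the seed topic.
        prompt = f"Continue the following passage in detail:\n{seed}"
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            do_sample=True,            # sampling for diversity across drafts
            temperature=0.9,
            top_p=0.95,
            num_return_sequences=samples_per_seed,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
        for out in outputs:
            text = tokenizer.decode(out, skip_special_tokens=True)
            candidates.append(text[len(prompt):].strip())
    # Crude relevance filter: keep drafts that echo the seed's key terms.
    seed_terms = {w.lower() for s in seeds for w in s.split() if len(w) > 4}
    return [c for c in set(candidates)
            if any(term in c.lower() for term in seed_terms)]

if __name__ == "__main__":
    seeds = ["Harry Potter raised his wand and"]
    for sample in elicit_forget_candidates(seeds):
        print("---\n", sample)
```

Keeping generation inside the target model (rather than an external generator) is the key design point: the sampled continuations are, by construction, drawn from the model's own knowledge distribution over the forget domain.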
Similar Papers
LLM Unlearning Without an Expert Curated Dataset
Computation and Language
Teaches computers to forget harmful or secret information.
Reveal and Release: Iterative LLM Unlearning with Self-generated Data
Computation and Language
Teaches computers to forget private or bad information.
MedForget: Hierarchy-Aware Multimodal Unlearning Testbed for Medical AI
CV and Pattern Recognition
Makes AI forget patient data without losing skill.