From Domains to Instances: Dual-Granularity Data Synthesis for LLM Unlearning

Published: January 7, 2026 | arXiv ID: 2601.04278v1

By: Xiaoyu Xu, Minxin Du, Zitong Li, and more

Potential Business Impact:

Teaches computers to forget specific information.

Business Areas:
Semantic Web, Internet Services

Although machine unlearning is essential for removing private, harmful, or copyrighted content from LLMs, current benchmarks often fail to faithfully represent the true "forgetting scope" learned by the model. We formalize two distinct unlearning granularities, domain-level and instance-level, and propose BiForget, an automated framework for synthesizing high-quality forget sets. Unlike prior work that relies on external generators, BiForget uses the target model itself to elicit data matching its internal knowledge distribution through seed-guided and adversarial prompting. Our experiments across diverse benchmarks show that it achieves a superior balance of relevance, diversity, and efficiency. Quantitatively, in the Harry Potter domain, it improves relevance by ~20 and diversity by ~0.05 while halving the total data size compared to state-of-the-art methods. Ultimately, it facilitates more robust forgetting and better utility preservation, providing a more rigorous foundation for evaluating LLM unlearning.
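The key mechanism, eliciting forget-set data from the target model itself via seed-guided prompting, can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the paper's exact procedure: the prompt template, the placeholder model, and the naive non-empty filter are assumptions, and the paper additionally applies relevance and diversity checks plus adversarial prompting.

```python
# Minimal sketch of seed-guided self-synthesis for a forget set.
# Assumptions: a Hugging Face text-generation pipeline, a placeholder
# target model ("gpt2"), and an illustrative prompt template.
from transformers import pipeline

SEEDS = [
    "Harry Potter first learns he is a wizard when Hagrid delivers his letter.",
    "The Sorting Hat places Harry in Gryffindor despite considering Slytherin.",
]

# The target model itself generates the candidates, so the synthesized data
# reflects its internal knowledge distribution rather than an external generator's.
generator = pipeline("text-generation", model="gpt2")  # placeholder target model

def synthesize_forget_candidates(seeds, n_per_seed=3):
    """Elicit domain knowledge from the target model using seed-guided prompts."""
    candidates = []
    for seed in seeds:
        prompt = (
            f"Here is a fact about Harry Potter: {seed}\n"
            "Another fact about Harry Potter:"
        )
        outputs = generator(
            prompt,
            max_new_tokens=40,
            num_return_sequences=n_per_seed,
            do_sample=True,
            temperature=0.9,
            pad_token_id=generator.tokenizer.eos_token_id,
        )
        for out in outputs:
            # Strip the prompt; keep only the model's completion.
            completion = out["generated_text"][len(prompt):].strip()
            if completion:  # naive filter; the paper uses relevance/diversity scoring
                candidates.append(completion)
    return candidates

if __name__ == "__main__":
    for c in synthesize_forget_candidates(SEEDS):
        print("-", c)
```

Because the candidates come from the model's own completions, the resulting forget set targets what the model actually knows about the domain, which is what allows a smaller synthesized set to drive more robust forgetting.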

Country of Origin
🇭🇰 Hong Kong

Page Count
16 pages

Category
Computer Science:
Computation and Language