Reveal and Release: Iterative LLM Unlearning with Self-generated Data

Published: September 18, 2025 | arXiv ID: 2509.14624v1

By: Linxi Xie, Xin Teng, Shichang Ke, and more

Potential Business Impact:

Teaches computers to forget private or bad information.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large language model (LLM) unlearning has demonstrated effectiveness in removing the influence of undesirable data (also known as forget data). Existing approaches typically assume full access to the forget dataset, overlooking two key challenges: (1) forget data is often privacy-sensitive, rare, or legally regulated, making it expensive or impractical to obtain; and (2) the distribution of available forget data may not align with how that information is represented within the model. To address these limitations, we propose a "Reveal-and-Release" method to unlearn with self-generated data, where we prompt the model to reveal what it knows using optimized instructions. To fully utilize the self-generated forget data, we propose an iterative unlearning framework that makes incremental adjustments to the model's weight space with parameter-efficient modules trained on the forget data. Experimental results demonstrate that our method balances the tradeoff between forget quality and utility preservation.

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Computation and Language