Score: 1

Learning-Time Encoding Shapes Unlearning in LLMs

Published: June 18, 2025 | arXiv ID: 2506.15076v1

By: Ruihan Wu, Konstantin Garov, Kamalika Chaudhuri

Potential Business Impact:

Teaches computers to forget bad or wrong information.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As large language models (LLMs) are increasingly deployed in the real world, the ability to "unlearn", or remove specific pieces of knowledge post hoc, has become essential for a variety of reasons ranging from privacy regulations to correcting outdated or harmful content. Prior work has proposed unlearning benchmarks and algorithms, and has typically assumed that the training process and the target model are fixed. In this work, we empirically investigate how learning-time choices in knowledge encoding impact the effectiveness of unlearning factual knowledge. Our experiments reveal two key findings: (1) learning with paraphrased descriptions improves unlearning performance and (2) unlearning individual pieces of knowledge from a chunk of text is challenging. Our results suggest that learning-time knowledge encoding may play a central role in enabling reliable post-hoc unlearning.
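The abstract refers to post-hoc unlearning algorithms without specifying one. As a hedged illustration only (not the paper's method), a common baseline is gradient ascent on the forget set: raising the model's loss on the data to be forgotten. The sketch below, with all names hypothetical, applies this to a toy one-parameter logistic model instead of an LLM:

```python
# Toy sketch of gradient-ascent unlearning (a common baseline in the
# unlearning literature; NOT the specific algorithm of this paper).
# Model: p(y=1|x) = sigmoid(w * x). "Unlearning" a point means taking
# gradient-ascent steps on its negative log-likelihood, so the model's
# confidence on that point is driven down.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_nll(w, x, y):
    # d/dw of -log p(y|x) for the logistic model
    return (sigmoid(w * x) - y) * x

def unlearn(w, forget, lr=0.5, steps=50):
    # Ascend the loss on the forget set; a real algorithm would also
    # regularize toward retained data to preserve other knowledge.
    for _ in range(steps):
        for x, y in forget:
            w += lr * grad_nll(w, x, y)
    return w

w_trained = 1.0                    # pretend this was learned from data
forget = [(2.0, 1)]                # the knowledge to remove
w_unlearned = unlearn(w_trained, forget)

# Confidence on the forgotten point drops sharply after unlearning.
print(sigmoid(w_trained * 2.0), sigmoid(w_unlearned * 2.0))
```

In an LLM setting the scalar `w` becomes the model's parameters and the forget set is the text encoding the targeted fact; the paper's point is that how that fact was encoded at learning time (e.g., via paraphrases) changes how well such post-hoc procedures work.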

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
23 pages

Category
Computer Science:
Computation and Language