UCD: Unlearning in LLMs via Contrastive Decoding

Published: June 12, 2025 | arXiv ID: 2506.12097v1

By: Vinith M. Suriyakumar, Ayush Sekhari, Ashia Wilson

Potential Business Impact:

Removes sensitive or unwanted information from large language models without degrading their overall performance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Machine unlearning aims to remove specific information, e.g., sensitive or undesirable content, from large language models (LLMs) while preserving overall performance. We propose an inference-time unlearning algorithm that uses contrastive decoding: two auxiliary smaller models, one trained without the forget set and one trained with it, guide the outputs of the original model via their difference during inference. Our strategy substantially improves the tradeoff between unlearning effectiveness and model utility. We evaluate our approach on two unlearning benchmarks, TOFU and MUSE. Results show notable gains in both forget quality and retained performance compared to prior approaches, suggesting that incorporating contrastive decoding can offer an efficient, practical avenue for unlearning concepts in large-scale models.
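
To make the decoding-time idea concrete, here is a minimal Python sketch of how the logit-space contrast between the two auxiliary models might steer the original model at inference. It assumes HuggingFace-style causal LM interfaces (models returning `.logits`), a simple additive combination rule, and a scaling hyperparameter `alpha`; the paper's exact combination rule and hyperparameters may differ.

```python
import torch


def contrastive_unlearning_logits(base_logits, aux_without_logits, aux_with_logits, alpha=1.0):
    """Adjust the original model's logits using the two auxiliary models (sketch).

    base_logits:        logits of the original (large) model
    aux_without_logits: logits of a small model trained WITHOUT the forget set
    aux_with_logits:    logits of a small model trained WITH the forget set
    alpha:              strength of the contrastive correction (assumed hyperparameter)

    The difference between the two auxiliary models estimates the direction in
    logit space associated with the forget set; adding it (i.e., subtracting the
    "with-forget-set" signal) steers the original model away from that content.
    """
    return base_logits + alpha * (aux_without_logits - aux_with_logits)


@torch.no_grad()
def decode_step(base_model, aux_without, aux_with, input_ids, alpha=1.0, temperature=1.0):
    """Sample one next token from the contrast-adjusted distribution."""
    base = base_model(input_ids).logits[:, -1, :]
    without = aux_without(input_ids).logits[:, -1, :]
    with_forget = aux_with(input_ids).logits[:, -1, :]
    logits = contrastive_unlearning_logits(base, without, with_forget, alpha)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)  # next-token ids, shape [batch, 1]
```

Because the adjustment happens purely at decoding time, the original model's weights stay untouched, which is what lets this approach trade off forget quality against retained utility by tuning a single scalar rather than retraining the large model.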

Page Count
27 pages

Category
Computer Science: Computation and Language