Score: 1

Lacuna Inc. at SemEval-2025 Task 4: LoRA-Enhanced Influence-Based Unlearning for LLMs

Published: June 4, 2025 | arXiv ID: 2506.04044v1

By: Aleksey Kudelya, Alexander Shirnin

Potential Business Impact:

Removes unwanted or sensitive information from an AI model without degrading its overall usefulness.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper describes LIBU (LoRA-enhanced influence-based unlearning), an algorithm for the task of unlearning: removing specific knowledge from a large language model without retraining it from scratch or compromising its overall utility (SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models). The algorithm combines classical influence functions, which remove the influence of the targeted data from the model, with second-order optimization, which stabilizes the model's overall utility. The authors' experiments show that this lightweight approach is applicable to unlearning LLMs across different kinds of tasks.
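Concretely, one unlearning step in this style can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' released implementation (no repository is linked above): it assumes a PyTorch causal language model already wrapped with LoRA adapters (e.g. via the PEFT library) whose batches include labels, and it approximates the Hessian in the influence update with a damped identity. Names such as `forget_batch`, `lora_params`, `damping`, and `scale` are placeholders introduced here.

```python
import torch

def lora_params(model):
    # Only the LoRA adapter weights are touched; the frozen base model is left intact.
    return [p for n, p in model.named_parameters() if "lora" in n and p.requires_grad]

def flat_grad(loss, params):
    # Gradient of the forget-set loss w.r.t. the adapter parameters, flattened into one vector.
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

@torch.no_grad()
def apply_update(params, step):
    # Scatter the flat update vector back into the individual parameter tensors.
    offset = 0
    for p in params:
        n = p.numel()
        p.add_(step[offset:offset + n].view_as(p))
        offset += n

def influence_unlearn_step(model, forget_batch, damping=1e-2, scale=1.0):
    """One influence-style removal step for a batch of data to forget.

    The classical influence-function update for removing data z is
    theta <- theta + H^{-1} grad L(z, theta); here H is crudely approximated
    by damping * I, a simplification rather than LIBU's exact second-order scheme.
    """
    params = lora_params(model)
    loss = model(**forget_batch).loss      # batch must include `labels` (HF causal LM)
    g = flat_grad(loss, params)
    apply_update(params, scale * g / damping)
```

Restricting the update to the low-rank adapter parameters is what would keep such a step lightweight and limit collateral damage to the base model's utility, which matches the paper's stated goal of unlearning without retraining from scratch.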

Country of Origin
🇷🇺 Russian Federation

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Computation and Language