
Module-Aware Parameter-Efficient Machine Unlearning on Transformers

Published: August 24, 2025 | arXiv ID: 2508.17233v1

By: Wenjie Bao, Jian Lou, Yuke Hu, and more

Potential Business Impact:

Removes the influence of specific data from AI models without degrading their overall performance.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The Transformer has become fundamental to a vast array of pre-trained large models that have achieved remarkable success across diverse applications. Machine unlearning, which focuses on efficiently removing the influence of specific data to comply with privacy regulations, shows promise in restricting updates to influence-critical parameters. However, existing parameter-efficient unlearning methods are largely devised in a module-oblivious manner, which tends to identify these parameters inaccurately and leads to inferior unlearning performance for Transformers. In this paper, we propose MAPE-Unlearn, a module-aware parameter-efficient machine unlearning approach that uses a learnable pair of masks to pinpoint influence-critical parameters in the heads and filters of Transformers. The learning objective of these masks is derived from the desiderata of unlearning and optimized through an efficient algorithm featuring a greedy search with a warm start. Extensive experiments on various Transformer models and datasets demonstrate the effectiveness and robustness of MAPE-Unlearn.
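The abstract's idea of module-aware masking with a greedy, warm-started search can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual algorithm: the per-unit "forget gain" and "retain cost" scores, the budget, and the scoring rule are all illustrative stand-ins for the objective the paper derives from its unlearning desiderata. The key contrast with module-oblivious methods is that selection happens per head or per filter, not per individual parameter.

```python
import numpy as np

def greedy_mask(forget_gain, retain_cost, budget, warm_start=None):
    """Greedily select up to `budget` units (attention heads / FFN filters)
    to mask for unlearning updates, maximizing forget-set benefit minus
    retain-set cost. All scores are hypothetical placeholders."""
    n = len(forget_gain)
    mask = np.zeros(n, dtype=bool)
    if warm_start is not None:
        # Warm start: seed the mask with pre-selected units before searching.
        mask[np.asarray(warm_start)] = True
    score = forget_gain - retain_cost
    while mask.sum() < budget:
        candidates = np.where(~mask)[0]
        if candidates.size == 0:
            break
        best = candidates[np.argmax(score[candidates])]
        if score[best] <= 0:
            # Stop early once no remaining unit helps on net.
            break
        mask[best] = True
    return mask

# Toy example: 8 heads + 16 filters with random influence scores.
rng = np.random.default_rng(0)
n_units = 8 + 16
forget_gain = rng.random(n_units)   # benefit of masking each unit
retain_cost = rng.random(n_units)   # damage to retained-data performance
warm = np.argsort(forget_gain - retain_cost)[-2:]   # two best units as seed
mask = greedy_mask(forget_gain, retain_cost, budget=6, warm_start=warm)
print(mask.sum(), "units selected for unlearning updates")
```

In a real setting, the selected mask would gate which heads and filters receive gradient updates during unlearning, leaving the rest of the model untouched.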

Country of Origin
🇨🇳 🇺🇸 China, United States

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)