MPRU: Modular Projection-Redistribution Unlearning as Output Filter for Classification Pipelines

Published: October 30, 2025 | arXiv ID: 2510.26230v1

By: Minyi Peng, Darian Gunamardi, Ivan Tjuawinata, and more

Potential Business Impact:

Enables removal of specific learned classes from deployed classification models without retraining from scratch or requiring full access to the original data.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Existing machine unlearning (MU) works, while new and promising, typically emphasize theoretical formulations or optimization objectives to achieve knowledge removal. When deployed in real-world scenarios, however, such solutions often face scalability issues and impose practical requirements such as full access to the original dataset and model. In contrast to existing approaches, we regard classification training as a sequential process in which classes are learned one after another, which we call the "inductive approach". Unlearning can then be performed by reversing the last training sequence. This is implemented by appending a projection-redistribution layer at the end of the model. Such an approach requires full access to neither the original dataset nor the model, addressing the challenges of existing methods, and it enables modular, model-agnostic deployment as an output filter in existing classification pipelines with minimal alterations. We conducted experiments across multiple datasets, including image data (CIFAR-10/100 with CNN-based models) and tabular data (Covertype with a tree-based model). The results show outputs consistently similar to those of a fully retrained model at a greatly reduced computational cost, demonstrating the applicability, scalability, and system compatibility of our solution while maintaining output quality in a practical setting.
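
The abstract describes appending a projection-redistribution layer as an output filter on top of an existing classifier. As a rough illustration only, the sketch below shows one plausible form such a filter could take when applied to a frozen model's class probabilities: the column for the unlearned class is projected out and its probability mass is redistributed over the remaining classes by renormalization. The class name `ProjectionRedistributionFilter`, its interface, and the renormalization rule are assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

class ProjectionRedistributionFilter:
    """Illustrative output filter (hypothetical sketch, not the paper's layer):
    drops a forgotten class from the output and redistributes its probability
    mass over the retained classes."""

    def __init__(self, num_classes: int, forget_class: int):
        # Indices of the classes that are kept after unlearning
        self.keep = [c for c in range(num_classes) if c != forget_class]

    def __call__(self, probs: np.ndarray) -> np.ndarray:
        # probs: (batch, num_classes) probabilities from the frozen base model
        kept = probs[:, self.keep]
        # Simple redistribution rule: renormalize so retained classes sum to 1
        return kept / kept.sum(axis=1, keepdims=True)

# Usage with any frozen classifier that outputs class probabilities
base_probs = np.array([[0.1, 0.6, 0.2, 0.1]])          # e.g. a 4-class model output
filt = ProjectionRedistributionFilter(num_classes=4, forget_class=1)
print(filt(base_probs))                                 # probabilities over the 3 retained classes
```

Because the filter touches only the model's outputs, it can sit at the end of an existing pipeline without modifying the trained model itself, which is the kind of modular, model-agnostic deployment the abstract claims.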

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)