REMISVFU: Vertical Federated Unlearning via Representation Misdirection for Intermediate Output Feature

Published: December 11, 2025 | arXiv ID: 2512.10348v1

By: Wenhan Wu, Zhili He, Huanghuang Liang, and more

Potential Business Impact:

Removes a party's data contribution from a jointly trained AI model without degrading accuracy for the remaining participants.

Business Areas:
Image Recognition Data and Analytics, Software

Data-protection regulations such as the GDPR grant every participant in a federated system a right to be forgotten. Federated unlearning has therefore emerged as a research frontier, aiming to remove a specific party's contribution from the learned model while preserving the utility of the remaining parties. However, most unlearning techniques focus on Horizontal Federated Learning (HFL), where data are partitioned by samples. In contrast, Vertical Federated Learning (VFL) allows organizations that possess complementary feature spaces to train a joint model without sharing raw data. The resulting feature-partitioned architecture renders HFL-oriented unlearning methods ineffective. In this paper, we propose REMISVFU, a plug-and-play representation misdirection framework that enables fast, client-level unlearning in splitVFL systems. When a deletion request arrives, the forgetting party collapses its encoder output to a randomly sampled anchor on the unit sphere, severing the statistical link between its features and the global model. To maintain utility for the remaining parties, the server jointly optimizes a retention loss and a forgetting loss, aligning their gradients via orthogonal projection to eliminate destructive interference. Evaluations on public benchmarks show that REMISVFU suppresses backdoor attack success to the natural class-prior level while sacrificing only about 2.5 percentage points of clean accuracy, outperforming state-of-the-art baselines.
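The two core mechanics in the abstract — collapsing the forgetting party's representation toward a random unit-sphere anchor, and removing the conflicting component of the forgetting gradient via orthogonal projection — can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: the loss form (squared distance to the anchor) and the PCGrad-style projection rule are assumptions, and all names (`sample_unit_anchor`, `forgetting_grad`, `project_orthogonal`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_anchor(dim, rng):
    """Sample a random anchor uniformly on the unit sphere."""
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def forgetting_grad(features, anchor):
    """Gradient of an assumed forgetting loss 0.5 * ||f - a||^2 w.r.t. the
    features: pulls the forgetting party's representation toward the anchor,
    severing its statistical link to the global model."""
    return features - anchor

def project_orthogonal(g_forget, g_retain):
    """If the forgetting gradient conflicts with the retention gradient
    (negative inner product), strip its component along g_retain so the
    unlearning update no longer degrades the remaining parties' utility."""
    dot = g_forget @ g_retain
    if dot < 0:
        g_forget = g_forget - (dot / (g_retain @ g_retain)) * g_retain
    return g_forget

dim = 8
anchor = sample_unit_anchor(dim, rng)
features = rng.normal(size=dim)   # stand-in for the encoder's output
g_f = forgetting_grad(features, anchor)
g_r = rng.normal(size=dim)        # stand-in for the retention-loss gradient
g_f_aligned = project_orthogonal(g_f, g_r)
```

After projection, the forgetting update is either unchanged (no conflict) or orthogonal to the retention gradient, which is the sense in which destructive interference between the two objectives is eliminated.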

Country of Origin
🇨🇳 🇲🇴 Macao, China

Page Count
9 pages

Category
Computer Science:
Artificial Intelligence