Dual-Phase Federated Deep Unlearning via Weight-Aware Rollback and Reconstruction
By: Changjun Zhou, Jintao Zheng, Leyou Yang, and more
Federated Unlearning (FUL) relies on client data and computing power to offer a privacy-preserving solution. However, high computational demands, complex incentive mechanisms, and disparities in client-side computing power often lead to long unlearning times and high costs. To sidestep these challenges, many existing methods rely on server-side knowledge distillation that removes only the updates of the target client, overlooking the privacy embedded in the contributions of other clients, which can lead to privacy leakage. In this work, we introduce DPUL, a novel server-side unlearning method that deeply unlearns all influential weights to prevent such privacy pitfalls. Our approach comprises three components: (i) identifying high-weight parameters by filtering client update magnitudes and rolling them back to ensure deep removal; (ii) leveraging a variational autoencoder (VAE) to reconstruct and eliminate low-weight parameters; and (iii) utilizing a projection-based technique to recover the model. Experimental results on four datasets demonstrate that DPUL surpasses state-of-the-art baselines, providing a 1%-5% improvement in accuracy and up to a 12x reduction in time cost.
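To make the first component concrete, below is a minimal Python sketch of magnitude-based filtering with rollback, assuming a PyTorch setting. The names (weight_aware_rollback, global_model, checkpoint, client_update), the quantile threshold rule, and the use of a pre-participation checkpoint are illustrative assumptions; the abstract does not specify DPUL's actual procedure.

import torch

def weight_aware_rollback(global_model, checkpoint, client_update, top_frac=0.1):
    """Roll back the parameters most influenced by the target client.

    global_model / checkpoint: dicts mapping parameter names to tensors
    (current weights and an assumed pre-participation snapshot).
    client_update: the target client's accumulated per-parameter update.
    top_frac: fraction of entries, by update magnitude, treated as high-weight.
    """
    unlearned = {}
    low_weight_mask = {}
    for name, w in global_model.items():
        delta = client_update[name].abs()
        # Threshold at the (1 - top_frac) quantile of the update magnitudes,
        # so only the most strongly influenced entries are selected.
        thresh = torch.quantile(delta.flatten().float(), 1.0 - top_frac)
        high = delta >= thresh
        # High-weight entries revert to the checkpoint (deep removal);
        # the remainder is left for a later reconstruction stage.
        unlearned[name] = torch.where(high, checkpoint[name], w)
        low_weight_mask[name] = ~high
    return unlearned, low_weight_mask

The returned mask marks the low-weight parameters that, in DPUL's pipeline, would presumably be handed to the VAE reconstruction stage (component ii) before the projection-based recovery step (component iii).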
Similar Papers
Tackling Federated Unlearning as a Parameter Estimation Problem
Machine Learning (CS)
Erases private data from AI without retraining.
Fully Decentralized Certified Unlearning
Machine Learning (CS)
Removes private data from AI without retraining.
REMISVFU: Vertical Federated Unlearning via Representation Misdirection for Intermediate Output Feature
Artificial Intelligence
Removes data from AI without hurting others.