ForgetMe: Evaluating Selective Forgetting in Generative Models
By: Zhenyu Yu, Mohd Yamani Inda Idris, Pei Wang
Potential Business Impact:
Safely removes private data from AI-generated images.
The widespread adoption of diffusion models in image generation has increased the demand for privacy-compliant unlearning. However, due to the high-dimensional nature and complex feature representations of diffusion models, achieving selective unlearning remains challenging, as existing methods struggle to remove sensitive information while preserving the consistency of non-sensitive regions. To address this, we propose an Automatic Dataset Creation Framework based on prompt-based layered editing and training-free local feature removal, constructing the ForgetMe dataset and introducing the Entangled evaluation metric. The Entangled metric quantifies unlearning effectiveness by assessing the similarity and consistency between the target and background regions and supports both paired (Entangled-D) and unpaired (Entangled-S) image data, enabling unsupervised evaluation. The ForgetMe dataset encompasses a diverse set of real and synthetic scenarios, including CUB-200-2011 (Birds), Stanford-Dogs, ImageNet, and a synthetic cat dataset. We apply LoRA fine-tuning on Stable Diffusion to achieve selective unlearning on this dataset and validate the effectiveness of both the ForgetMe dataset and the Entangled metric, establishing them as benchmarks for selective unlearning. Our work provides a scalable and adaptable solution for advancing privacy-preserving generative AI.
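The abstract describes the Entangled metric as scoring unlearning by comparing how much the target region changed against how consistent the background stayed. The paper's exact formulation is not given here, so the sketch below is only an illustrative stand-in: it uses a per-region mean-squared-error on paired images (the Entangled-D setting), and the function name, normalization, and combination rule are all assumptions, not the authors' definition.

```python
import numpy as np

def entangled_score(original: np.ndarray, edited: np.ndarray,
                    target_mask: np.ndarray) -> float:
    """Illustrative paired (Entangled-D-style) score, NOT the paper's formula.

    A good unlearning result should change the masked target region a lot
    (the concept is forgotten) while leaving the background nearly intact.
    """
    diff = (original.astype(float) - edited.astype(float)) ** 2
    target_mse = diff[target_mask].mean()      # change inside the target region
    bg_mse = diff[~target_mask].mean()         # change in the background

    # Map each MSE into [0, 1): higher target change and lower background
    # change should both push the score up.
    target_removal = 1.0 - 1.0 / (1.0 + target_mse)
    bg_consistency = 1.0 / (1.0 + bg_mse)
    return target_removal * bg_consistency
```

Under this toy definition, an edit that erases only the target scores higher than one that also disturbs the background, which matches the trade-off the abstract describes; the unpaired (Entangled-S) variant would instead need a reference-free similarity measure.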
Similar Papers
Synthetic Forgetting without Access: A Few-shot Zero-glance Framework for Machine Unlearning
Machine Learning (CS)
Removes private data from AI without original data.
MedForget: Hierarchy-Aware Multimodal Unlearning Testbed for Medical AI
CV and Pattern Recognition
Makes AI forget patient data without losing skill.
Reveal and Release: Iterative LLM Unlearning with Self-generated Data
Computation and Language
Teaches computers to forget private or bad information.