Unconsciously Forget: Mitigating Memorization Without Knowing What Is Being Memorized
By: Er Jin, Yang Zhang, Yongli Mou, and more
Potential Business Impact:
Stops AI from copying art it learned from.
Recent advances in generative models have demonstrated an exceptional ability to produce highly realistic images. However, previous studies show that generated images often resemble the training data, and this problem becomes more severe as model size increases. Memorizing training data can lead to legal challenges, including copyright infringement, violations of portrait rights, and trademark violations. Existing approaches to mitigating memorization either manipulate the denoising sampling process to steer image embeddings away from the memorized embedding space, or employ unlearning methods that require training on datasets containing specific sets of memorized concepts. These methods often incur substantial computational overhead during sampling, or focus narrowly on removing one or more groups of target concepts, significantly limiting their scalability. To understand and mitigate these problems, our work, UniForget, offers a new perspective on the root cause of memorization. We demonstrate that specific parts of the model are responsible for generating copyrighted content. By applying model pruning, we can effectively suppress the probability of generating copyrighted content without targeting specific concepts, while preserving the model's general generative capabilities. Additionally, we show that our approach is both orthogonal and complementary to existing unlearning methods, highlighting its potential to improve current unlearning and de-memorization techniques.
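The core idea of suppressing memorization by pruning specific weights can be illustrated with a minimal sketch. The abstract does not specify how UniForget identifies memorization-responsible parameters, so the per-weight `scores` array below is a hypothetical stand-in for whatever attribution criterion the paper uses; the pruning step itself (zeroing the highest-scoring weights) is the general mechanism.

```python
import numpy as np

def prune_by_score(weights, scores, prune_fraction=0.05):
    """Zero out the fraction of weights with the highest memorization scores.

    `scores` is a hypothetical per-weight attribution of memorization
    (same shape as `weights`); UniForget's actual scoring criterion is
    not reproduced here.
    """
    k = int(weights.size * prune_fraction)
    if k == 0:
        return weights.copy()
    # Indices of the k highest-scoring (most memorization-linked) weights.
    top = np.argpartition(scores.ravel(), -k)[-k:]
    pruned = weights.copy().ravel()
    pruned[top] = 0.0
    return pruned.reshape(weights.shape)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))      # stand-in for one layer's weight matrix
S = rng.random(size=(8, 8))      # stand-in memorization scores
W_pruned = prune_by_score(W, S, prune_fraction=0.1)
print(np.count_nonzero(W_pruned == 0))  # 6 of 64 weights zeroed
```

Because only a small, targeted fraction of weights is removed, the bulk of the layer's parameters (and hence the model's general generative capability) is left untouched, which is the trade-off the abstract describes.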
Similar Papers
Unlearning Imperative: Securing Trustworthy and Responsible LLMs through Engineered Forgetting
Machine Learning (CS)
Lets AI forget private information when asked.
Distill, Forget, Repeat: A Framework for Continual Unlearning in Text-to-Image Diffusion Models
Machine Learning (CS)
Removes unwanted data from AI without retraining.
Demystifying Foreground-Background Memorization in Diffusion Models
CV and Pattern Recognition
Finds if AI copied parts of its training pictures.