Beyond Memorization: Gradient Projection Enables Selective Learning in Diffusion Models
By: Divya Kothandaraman, Jaclyn Pytlarz
Memorization in large-scale text-to-image diffusion models poses significant security and intellectual property risks, enabling adversarial attribute extraction and the unauthorized reproduction of sensitive or proprietary features. While conventional dememorization techniques, such as regularization and data filtering, limit overfitting to specific training examples, they fail to systematically prevent the internalization of prohibited concept-level features. Simply discarding all images containing a sensitive feature wastes invaluable training data, necessitating a method for selective unlearning at the concept level. To address this, we introduce a Gradient Projection Framework designed to enforce a stringent requirement of concept-level feature exclusion. Our defense operates during backpropagation by systematically identifying and excising training signals aligned with embeddings of prohibited attributes. Specifically, we project each gradient update onto the orthogonal complement of the sensitive feature's embedding space, thereby zeroing out its influence on the model's weights. Our method integrates seamlessly into standard diffusion model training pipelines and complements existing defenses. We analyze our method against an adversary aiming for feature extraction. In extensive experiments, we demonstrate that our framework drastically reduces memorization while rigorously preserving generation quality and semantic fidelity. By reframing memorization control as selective learning, our approach establishes a new paradigm for IP-safe and privacy-preserving generative AI.
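The core operation the abstract describes, projecting each gradient update onto the orthogonal complement of the sensitive feature's embedding space, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of NumPy, and the toy vectors are all assumptions made for clarity.

```python
import numpy as np

def project_out(grad: np.ndarray, sensitive_dirs: np.ndarray) -> np.ndarray:
    """Project a flattened gradient onto the orthogonal complement of the
    subspace spanned by prohibited-concept embedding directions.

    grad:           (d,) gradient vector
    sensitive_dirs: (k, d) rows spanning the sensitive-feature subspace
    """
    # Orthonormalize the sensitive directions via QR on the transpose.
    q, _ = np.linalg.qr(sensitive_dirs.T)   # q: (d, k), orthonormal columns
    # Subtract the component of grad that lies in the sensitive subspace,
    # zeroing its influence on the weight update.
    return grad - q @ (q.T @ grad)

# Toy example: a gradient with a component along the prohibited
# direction e0 = [1, 0, 0] has that component removed.
g = np.array([1.0, 1.0, 0.0])
v = np.array([[1.0, 0.0, 0.0]])
g_safe = project_out(g, v)   # -> array([0., 1., 0.])
```

In a training loop, a step like this would typically be applied to per-parameter gradients (e.g. via gradient hooks) before the optimizer update, which is what lets the method slot into standard diffusion training pipelines without changing the loss.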