CGCE: Classifier-Guided Concept Erasure in Generative Models
By: Viet Nguyen, Vishal M. Patel
Potential Business Impact:
Prevents image and video generators from producing unsafe content without degrading output for safe prompts.
Recent advancements in large-scale generative models have enabled the creation of high-quality images and videos, but have also raised significant safety concerns regarding the generation of unsafe content. To mitigate this, concept erasure methods have been developed to remove undesirable concepts from pre-trained models. However, existing methods remain vulnerable to adversarial attacks that can regenerate the erased content. Moreover, achieving robust erasure often degrades the model's generative quality for safe, unrelated concepts, creating a difficult trade-off between safety and performance. To address this challenge, we introduce Classifier-Guided Concept Erasure (CGCE), an efficient plug-and-play framework that provides robust concept erasure for diverse generative models without altering their original weights. CGCE uses a lightweight classifier operating on text embeddings to first detect and then refine prompts containing undesired concepts. This approach is highly scalable, allowing for multi-concept erasure by aggregating guidance from several classifiers. By modifying only unsafe embeddings at inference time, our method prevents harmful content generation while preserving the model's original quality on benign prompts. Extensive experiments show that CGCE achieves state-of-the-art robustness against a wide range of red-teaming attacks. Our approach also maintains high generative utility, demonstrating a superior balance between safety and performance. We showcase the versatility of CGCE through its successful application to various modern text-to-image (T2I) and text-to-video (T2V) models, establishing it as a practical and effective solution for safe generative AI.
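The abstract describes a detect-then-refine loop over text embeddings: a lightweight classifier flags prompts containing an undesired concept, and only the flagged embeddings are modified before reaching the frozen generator. Below is a minimal PyTorch sketch of that idea. The MLP classifier architecture, the gradient-based refinement rule, and the score-summing aggregation across classifiers are all illustrative assumptions, since the abstract does not specify these details.

```python
import torch
import torch.nn as nn


class ConceptClassifier(nn.Module):
    """Lightweight MLP that scores whether a text embedding carries a concept."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, tokens, dim) token embeddings from the text encoder.
        # Mean-pool over tokens, then output the concept probability.
        return torch.sigmoid(self.net(emb.mean(dim=1)))


def refine_embedding(emb, classifiers, steps=10, lr=0.1, threshold=0.5):
    """Detect, then refine: benign embeddings pass through untouched;
    flagged embeddings are nudged until every classifier score drops.

    Multi-concept guidance is aggregated here by summing classifier
    scores, which is one plausible scheme, not necessarily the paper's.
    """
    emb = emb.detach().clone()
    # Detection: if no classifier flags the prompt, leave it as-is.
    if all(c(emb).max().item() < threshold for c in classifiers):
        return emb
    # Refinement: gradient descent on the embedding only; the
    # generative model's weights are never touched.
    emb.requires_grad_(True)
    opt = torch.optim.Adam([emb], lr=lr)
    for _ in range(steps):
        score = sum(c(emb).sum() for c in classifiers)
        opt.zero_grad()
        score.backward()
        opt.step()
    return emb.detach()  # fed to the frozen generator instead of the original
```

In a real pipeline this routine would sit between the text encoder and the denoising backbone of a frozen T2I or T2V model, which is what makes the approach plug-and-play: adding or removing a concept only means adding or removing a small classifier.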
Similar Papers
GrOCE: Graph-Guided Online Concept Erasure for Text-to-Image Diffusion Models
CV and Pattern Recognition
Removes unwanted concepts from AI art models without harming unrelated content.
Localized Concept Erasure for Text-to-Image Diffusion Models Using Training-Free Gated Low-Rank Adaptation
CV and Pattern Recognition
Blocks unsafe image generation while preserving benign outputs.
TRCE: Towards Reliable Malicious Concept Erasure in Text-to-Image Diffusion Models
CV and Pattern Recognition
Reliably erases malicious concepts from text-to-image models.