VideoEraser: Concept Erasure in Text-to-Video Diffusion Models
By: Naen Xu, Jinghuai Zhang, Changjiang Li, and others
Potential Business Impact:
Stops AI video generators from producing harmful or unauthorized content.
The rapid growth of text-to-video (T2V) diffusion models has raised concerns about privacy, copyright, and safety due to their potential misuse in generating harmful or misleading content. These models are often trained on large, loosely curated datasets that may include unauthorized personal identities, artistic creations, and harmful materials, which can lead to uncontrolled production and distribution of such content. To address this, we propose VideoEraser, a training-free framework that prevents T2V diffusion models from generating videos with undesirable concepts, even when explicitly prompted with those concepts. Designed as a plug-and-play module, VideoEraser integrates seamlessly with representative T2V diffusion models via a two-stage process: Selective Prompt Embedding Adjustment (SPEA) and Adversarial-Resilient Noise Guidance (ARNG). We conduct extensive evaluations across four tasks: object erasure, artistic style erasure, celebrity erasure, and explicit content erasure. Experimental results show that VideoEraser consistently outperforms prior methods in terms of efficacy, integrity, fidelity, robustness, and generalizability. Notably, VideoEraser achieves state-of-the-art performance in suppressing undesirable content during T2V generation, reducing it by 46% on average across the four tasks compared to baselines.
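The abstract does not spell out how the two stages operate, but the names suggest a familiar pattern: adjust the prompt embedding to suppress the target concept, then steer the denoising trajectory away from it. The sketch below is a minimal illustration of that general idea, not the paper's actual algorithm; the function names, the orthogonal-projection formulation of the embedding adjustment, and the negative-guidance form of the noise steering are all assumptions.

```python
import numpy as np

def adjust_prompt_embedding(prompt_emb, concept_emb, strength=1.0):
    # Illustrative stand-in for SPEA (assumption, not the paper's method):
    # suppress an undesired concept by removing the prompt embedding's
    # component along the normalized concept direction.
    c = concept_emb / np.linalg.norm(concept_emb)
    proj = (prompt_emb @ c) * c          # component along the concept
    return prompt_emb - strength * proj  # embedding with concept removed

def guided_noise(eps_cond, eps_concept, scale=1.0):
    # Illustrative stand-in for ARNG (assumption): push the denoiser's
    # noise prediction away from the concept-conditioned prediction,
    # in the spirit of negative classifier-free guidance.
    return eps_cond + scale * (eps_cond - eps_concept)

# With strength=1.0 the adjusted embedding is orthogonal to the concept:
e = np.array([3.0, 4.0])       # toy prompt embedding
c = np.array([1.0, 0.0])       # toy concept direction
e_adj = adjust_prompt_embedding(e, c)
print(e_adj)                   # concept component along c is gone
print(float(e_adj @ c))        # ~0.0
```

In a real T2V pipeline, such an adjustment would be applied to the text-encoder output before cross-attention, and the guidance term at each denoising step, which is what would make the approach training-free and plug-and-play.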
Similar Papers
Now You See It, Now You Don't - Instant Concept Erasure for Safe Text-to-Image and Video Generation
CV and Pattern Recognition
Removes unwanted things from AI-generated pictures and videos.
Bi-Erasing: A Bidirectional Framework for Concept Removal in Diffusion Models
CV and Pattern Recognition
Removes bad pictures, keeps good ones.