Training-Free Safe Text Embedding Guidance for Text-to-Image Diffusion Models
By: Byeonghu Na, Mina Kang, Jiseok Kwak, and more
Potential Business Impact:
Lets AI image generators block unsafe content (such as nudity or violence) without retraining the model.
Text-to-image models have recently made significant advances in generating realistic and semantically coherent images, driven by advanced diffusion models and large-scale web-crawled datasets. However, these datasets often contain inappropriate or biased content, raising concerns about the generation of harmful outputs when provided with malicious text prompts. We propose Safe Text embedding Guidance (STG), a training-free approach to improve the safety of diffusion models by guiding the text embeddings during sampling. STG adjusts the text embeddings based on a safety function evaluated on the expected final denoised image, allowing the model to generate safer outputs without additional training. Theoretically, we show that STG aligns the underlying model distribution with safety constraints, thereby achieving safer outputs while minimally affecting generation quality. Experiments on various safety scenarios, including nudity, violence, and artist-style removal, show that STG consistently outperforms both training-based and training-free baselines in removing unsafe content while preserving the core semantic intent of input prompts. Our code is available at https://github.com/aailab-kaist/STG.
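To make the mechanism concrete, below is a minimal PyTorch sketch of one sampling step with safety guidance on the text embedding, assuming a diffusers-style UNet ε-predictor and a DDIM-style scheduler. The `safety_score` function and the step size `eta` are hypothetical stand-ins for the paper's safety function and guidance scale; the authors' actual update rule is in the linked repository.

```python
import torch

def stg_step(unet, scheduler, x_t, t, text_emb, safety_score, eta=0.1):
    """One diffusion sampling step with safe text embedding guidance (sketch).

    unet:         diffusers-style eps-predictor eps_theta(x_t, t, text_emb)
    scheduler:    exposes alphas_cumprod and a DDIM-style step() update
    safety_score: differentiable function mapping an image estimate to a
                  scalar unsafety loss (hypothetical stand-in)
    eta:          guidance step size on the text embedding (hypothetical)
    """
    text_emb = text_emb.detach().requires_grad_(True)

    # Predict noise and form the expected final denoised image x0_hat
    # via Tweedie's formula: x0_hat = (x_t - sqrt(1 - a_t) * eps) / sqrt(a_t).
    eps = unet(x_t, t, encoder_hidden_states=text_emb).sample
    a_t = scheduler.alphas_cumprod[t]
    x0_hat = (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)

    # Evaluate the safety function on x0_hat and take a gradient step on
    # the text embedding, steering the conditioning away from unsafe content.
    loss = safety_score(x0_hat)
    grad = torch.autograd.grad(loss, text_emb)[0]
    guided_emb = (text_emb - eta * grad).detach()

    # Re-predict noise with the guided embedding and advance the sampler.
    with torch.no_grad():
        eps_safe = unet(x_t, t, encoder_hidden_states=guided_emb).sample
        x_prev = scheduler.step(eps_safe, t, x_t).prev_sample
    return x_prev, guided_emb
```

Note the design choice this sketch illustrates: the gradient updates the text embedding rather than the noisy latent x_t, leaving the sampler's dynamics intact, which is consistent with the abstract's claim that STG removes unsafe content while preserving the prompt's core semantics.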
Similar Papers
SafeGuider: Robust and Practical Content Safety Control for Text-to-Image Models
Cryptography and Security
Controls text-to-image models so unsafe prompts do not produce unsafe images.
Detect-and-Guide: Self-regulation of Diffusion Models for Safe Text-to-Image Generation via Guideline Token Optimization
CV and Pattern Recognition
Lets diffusion models detect unsafe content and self-correct during generation.