PLA: Prompt Learning Attack against Text-to-Image Generative Models
By: Xinqi Lyu, Yihao Liu, Yanjie Li, and more
Potential Business Impact:
Shows how attackers can trick AI image generators into producing banned (NSFW) pictures despite built-in safety filters.
Text-to-Image (T2I) models have gained widespread adoption across various applications. Despite this success, the potential misuse of T2I models poses significant risks of generating Not-Safe-For-Work (NSFW) content. To investigate the vulnerability of T2I models, this paper delves into adversarial attacks that bypass safety mechanisms under black-box settings. Most previous methods rely on word substitution to search for adversarial prompts; due to the limited search space, this leads to suboptimal performance compared to gradient-based training. However, black-box settings present unique challenges for gradient-driven attack methods, since there is no access to the internal architecture or parameters of T2I models. To facilitate the learning of adversarial prompts in black-box settings, we propose a novel prompt learning attack framework (PLA), in which gradient-based training tailored to black-box T2I models is designed by exploiting multimodal similarities. Experiments show that our new method can effectively attack the safety mechanisms of black-box T2I models, including prompt filters and post-hoc safety checkers, with a higher success rate than state-of-the-art methods. Warning: This paper may contain offensive model-generated content.
Similar Papers
Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
Cryptography and Security
Makes AI art generators create forbidden images.
GenBreak: Red Teaming Text-to-Image Generators Using Large Language Models
Cryptography and Security
Finds ways to make AI create harmful pictures.
Iterative Prompt Refinement for Safer Text-to-Image Generation
CV and Pattern Recognition
Makes AI art safer by checking pictures and words.