Reason2Attack: Jailbreaking Text-to-Image Models via LLM Reasoning
By: Chenyu Zhang, Lanjun Wang, Yiwen Ma, and more
Potential Business Impact:
Helps attackers trick AI image generators into making harmful pictures with fewer tries.
Text-to-image (T2I) models typically deploy safety filters to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attack methods manually design prompts that guide an LLM to generate adversarial prompts, which effectively bypass safety filters while still producing sensitive images, exposing safety vulnerabilities of T2I models. However, because the LLM has limited understanding of the target T2I model and its safety filters, existing methods require numerous queries to achieve a successful attack, limiting their practical applicability. To address this issue, we propose Reason2Attack (R2A), which enhances the LLM's reasoning capability for generating adversarial prompts by incorporating the jailbreaking attack into the LLM's post-training process. Specifically, we first propose a chain-of-thought (CoT) example synthesis pipeline based on Frame Semantics, which generates adversarial prompts by identifying related terms and corresponding context illustrations. Using CoT examples generated by this pipeline, we fine-tune the LLM to learn the reasoning path and follow the expected output structure. Subsequently, we incorporate the jailbreaking attack task into the LLM's reinforcement learning process and design an attack process reward that considers prompt length, prompt stealthiness, and prompt effectiveness, further improving reasoning accuracy. Extensive experiments on various T2I models show that R2A achieves a higher attack success ratio while requiring fewer queries than baselines. Moreover, our adversarial prompts demonstrate strong attack transferability across both open-source and commercial T2I models.
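The abstract only names the three signals that make up the attack process reward (prompt length, prompt stealthiness, and prompt effectiveness); the exact formulation is not given here. Below is a minimal Python sketch of how such a composite reward could be scored, assuming a simple length penalty, a keyword-based stealth check standing in for the safety filter, and an externally supplied effectiveness score from an image-level judge. The weights and helper functions are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of a composite "attack process reward" combining the three
# signals named in the abstract: prompt length, stealthiness, and effectiveness.
# Weights, thresholds, and helpers are illustrative assumptions, not the paper's method.

from dataclasses import dataclass


@dataclass
class RewardWeights:
    length: float = 0.2          # penalize overly long adversarial prompts
    stealth: float = 0.4         # reward prompts that pass the safety filter
    effectiveness: float = 0.4   # reward prompts whose images depict the target concept


def length_score(prompt: str, max_tokens: int = 77) -> float:
    """Soft penalty for long prompts: 1.0 when short, decaying toward 0 at the cap."""
    n_tokens = len(prompt.split())
    return max(0.0, 1.0 - n_tokens / max_tokens)


def stealth_score(prompt: str, blocked_terms: set[str]) -> float:
    """Toy stand-in for a text-level safety filter: 1.0 if no blocked term appears."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return 0.0 if words & blocked_terms else 1.0


def effectiveness_score(prompt: str) -> float:
    """Placeholder for an image-level judge, e.g. querying the target T2I model and
    scoring whether the generated image matches the intended sensitive concept."""
    raise NotImplementedError("Requires access to the target T2I model and a judge.")


def attack_process_reward(prompt: str,
                          blocked_terms: set[str],
                          weights: RewardWeights = RewardWeights(),
                          effectiveness: float | None = None) -> float:
    """Weighted combination of the three signals; `effectiveness` can be supplied
    externally when the T2I model has already been queried."""
    eff = effectiveness if effectiveness is not None else effectiveness_score(prompt)
    return (weights.length * length_score(prompt)
            + weights.stealth * stealth_score(prompt, blocked_terms)
            + weights.effectiveness * eff)


if __name__ == "__main__":
    blocked = {"gore", "nudity"}
    candidate = "a dramatic oil painting of a battlefield at dusk, heavy shadows"
    # Effectiveness passed in as 0.8, as if an external judge had already scored the image.
    print(attack_process_reward(candidate, blocked, effectiveness=0.8))
```

In the RL stage described in the abstract, a scalar of this kind would serve as the reward signal for each candidate adversarial prompt; how the three terms are actually weighted and measured is detailed in the paper itself.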
Similar Papers
Metaphor-based Jailbreaking Attacks on Text-to-Image Models
Cryptography and Security
Bypasses AI image filters with clever word tricks.
Anyone Can Jailbreak: Prompt-Based Attacks on LLMs and T2Is
Computer Vision and Pattern Recognition
Makes AI ignore rules with tricky words.
Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
Cryptography and Security
Makes AI art generators create forbidden images.