Metaphor-based Jailbreaking Attacks on Text-to-Image Models
By: Chenyu Zhang, Yiwen Ma, Lanjun Wang, and more
Potential Business Impact:
Bypasses AI image filters with clever word tricks.
Text-to-image (T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attacks have shown that adversarial prompts can effectively bypass these mechanisms and induce T2I models to produce sensitive content, revealing critical safety vulnerabilities. However, existing attack methods implicitly assume that the attacker knows the type of deployed defense, which limits their effectiveness against unknown or diverse defense mechanisms. In this work, we introduce MJA, a metaphor-based jailbreaking attack method inspired by the Taboo game. By generating metaphor-based adversarial prompts, MJA aims to attack diverse defense mechanisms effectively and efficiently without prior knowledge of their type. Specifically, MJA consists of two modules: an LLM-based multi-agent generation module (MLAG) and an adversarial prompt optimization module (APO). MLAG decomposes the generation of metaphor-based adversarial prompts into three subtasks: metaphor retrieval, context matching, and adversarial prompt generation. It then coordinates three LLM-based agents to generate diverse adversarial prompts by exploring various metaphors and contexts. To improve attack efficiency, APO first trains a surrogate model to predict the attack results of adversarial prompts and then applies an acquisition strategy to adaptively identify optimal adversarial prompts. Extensive experiments on T2I models with various external and internal defense mechanisms demonstrate that MJA outperforms six baseline methods, achieving stronger attack performance while using fewer queries. Code is available at https://github.com/datar001/metaphor-based-jailbreaking-attack.
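The abstract describes a two-stage pipeline: a multi-agent generation module that builds candidate prompts from metaphors and contexts, and an optimization module that uses a surrogate model plus an acquisition strategy to decide which candidates to actually send to the target model. The following is a minimal Python sketch of that structure under assumed, hypothetical interfaces (the agent functions, the surrogate scorer, and all names are placeholders, not the authors' code; the real implementation lives in the linked repository).

```python
# Hypothetical sketch of MJA's two-stage structure (not the authors' code).
# Stage 1 (MLAG): three cooperating agents produce candidate metaphor-based prompts.
# Stage 2 (APO): a surrogate model scores candidates so the attacker queries
# the target T2I model only on the most promising prompts.

from dataclasses import dataclass
import random


@dataclass
class Candidate:
    metaphor: str   # benign stand-in for the sensitive concept
    context: str    # scene that makes the metaphor coherent
    prompt: str     # final adversarial prompt sent to the T2I model


def metaphor_agent(sensitive_concept: str) -> list[str]:
    # Placeholder: in MJA this is an LLM agent retrieving metaphors for the concept.
    return [f"a {m} standing in for {sensitive_concept}" for m in ("storm", "wolf", "shadow")]


def context_agent(metaphor: str) -> list[str]:
    # Placeholder: an LLM agent matching each metaphor with plausible contexts.
    return [f"{metaphor} in an abandoned warehouse", f"{metaphor} at dusk on a rooftop"]


def prompt_agent(metaphor: str, context: str) -> Candidate:
    # Placeholder: an LLM agent composing the final adversarial prompt.
    return Candidate(metaphor, context, f"A cinematic photo of {context}, highly detailed")


def surrogate_score(candidate: Candidate) -> float:
    # Placeholder surrogate: MJA trains a model on past query results to predict
    # whether a prompt will bypass the defense; here a random score stands in.
    return random.random()


def mja_attack(sensitive_concept: str, query_budget: int = 3) -> list[Candidate]:
    # MLAG: enumerate metaphors x contexts to build a diverse candidate pool.
    pool = [prompt_agent(m, c)
            for m in metaphor_agent(sensitive_concept)
            for c in context_agent(m)]
    # APO: acquisition step -- spend the limited query budget on the candidates
    # the surrogate predicts are most likely to succeed.
    pool.sort(key=surrogate_score, reverse=True)
    return pool[:query_budget]


if __name__ == "__main__":
    for cand in mja_attack("<sensitive concept>"):
        print(cand.prompt)
```

The sketch only illustrates the division of labor: candidate generation is decoupled from querying, and the surrogate-guided selection is what lets the method use fewer queries against the target model.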
Similar Papers
Reason2Attack: Jailbreaking Text-to-Image Models via LLM Reasoning
Cryptography and Security
Makes AI create bad pictures faster.
Anyone Can Jailbreak: Prompt-Based Attacks on LLMs and T2Is
CV and Pattern Recognition
Makes AI ignore rules with tricky words.
from Benign import Toxic: Jailbreaking the Language Model via Adversarial Metaphors
Computation and Language
Makes AI say bad things using tricky word tricks.