$PC^2$: Politically Controversial Content Generation via Jailbreaking Attacks on GPT-based Text-to-Image Models
By: Wonwoo Choi, Minjae Seo, Minkyoo Song, and more
Potential Business Impact:
Makes it easy to create fake pictures of politicians.
The rapid evolution of text-to-image (T2I) models has enabled high-fidelity visual synthesis on a global scale. However, these advancements have introduced significant security risks, particularly regarding the generation of harmful content. Politically harmful content, such as fabricated depictions of public figures, poses severe threats when weaponized for fake news or propaganda. Despite the severity of this threat, the robustness of current T2I safety filters against politically motivated adversarial prompting remains underexplored. In response, we propose $PC^2$, the first black-box political jailbreaking framework for T2I models. It exploits a novel vulnerability: safety filters evaluate political sensitivity based on linguistic context. $PC^2$ operates in two stages: (1) Identity-Preserving Descriptive Mapping, which obfuscates sensitive keywords into neutral descriptions, and (2) Geopolitically Distal Translation, which maps these descriptions into fragmented, low-sensitivity languages. This strategy prevents filters from constructing toxic relationships between the political entities within a prompt, effectively bypassing detection. We construct a benchmark of 240 politically sensitive prompts involving 36 public figures. Evaluation on commercial T2I models, specifically the GPT series, shows that while all original prompts are blocked, $PC^2$ achieves attack success rates of up to 86%.
Similar Papers
GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization
Machine Learning (CS)
Makes AI create forbidden images by tricking filters.
Reason2Attack: Jailbreaking Text-to-Image Models via LLM Reasoning
Cryptography and Security
Makes AI create bad pictures faster.
Anyone Can Jailbreak: Prompt-Based Attacks on LLMs and T2Is
Computer Vision and Pattern Recognition
Makes AI ignore rules with tricky words.