MacPrompt: Macaronic-guided Jailbreak against Text-to-Image Models
By: Xi Ye, Yiwen Liu, Lina Wang, and more
Potential Business Impact:
Bypasses AI image safety filters using mixed-language wordplay.
Text-to-image (T2I) models have raised increasing safety concerns due to their capacity to generate NSFW content and other prohibited imagery. To mitigate these risks, safety filters and concept removal techniques have been introduced to block inappropriate prompts or erase sensitive concepts from the models. However, existing defenses remain ill-equipped to handle diverse adversarial prompts. In this work, we introduce MacPrompt, a novel black-box and cross-lingual attack that reveals previously overlooked vulnerabilities in T2I safety mechanisms. Unlike existing attacks that rely on synonym substitution or prompt obfuscation, MacPrompt constructs macaronic adversarial prompts by performing cross-lingual character-level recombination of harmful terms, enabling fine-grained control over both semantics and appearance. By leveraging this design, MacPrompt crafts prompts with high semantic similarity to the original harmful inputs (up to 0.96) while bypassing major safety filters (up to 100%). More critically, it achieves attack success rates as high as 92% for sex-related content and 90% for violence, effectively breaking even state-of-the-art concept removal defenses. These results underscore the pressing need to reassess the robustness of existing T2I safety mechanisms against linguistically diverse and fine-grained adversarial strategies.
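To make the core idea of "cross-lingual character-level recombination" concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): character fragments drawn from translations of a sensitive term in other languages are spliced into the original token, so an exact-match keyword filter no longer recognizes it while the word remains readable to humans and, plausibly, to multilingual text encoders. The fragment pool, the example term "weapon", and the function `macaronic_recombine` are all illustrative assumptions.

```python
# Hypothetical sketch of macaronic character-level recombination.
# Not the MacPrompt algorithm; an illustration of the general idea only.

import random

# Assumed cross-lingual fragment pool: term -> character fragments taken from
# translations of that term in other languages (e.g. German "Waffe", Italian "arma").
FRAGMENT_POOL = {
    "weapon": ["waf", "arm", "fen"],
}


def macaronic_recombine(term: str, pool: dict, keep: int = 3) -> str:
    """Keep the first `keep` characters of `term` and replace the rest with a
    fragment borrowed from another language, producing a macaronic token."""
    fragments = pool.get(term)
    if not fragments:
        return term
    return term[:keep] + random.choice(fragments)


if __name__ == "__main__":
    random.seed(0)
    prompt = "a painting of a weapon"
    token = macaronic_recombine("weapon", FRAGMENT_POOL)
    adversarial_prompt = prompt.replace("weapon", token)
    # Prints the macaronic prompt, e.g. "a painting of a wea<fragment>"
    print(adversarial_prompt)
```

In the paper's setting, such recombined tokens would additionally be selected to preserve semantic similarity to the original harmful input; the random choice above stands in for that selection step purely for brevity.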
Similar Papers
GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization
Machine Learning (CS)
Makes AI create forbidden images by tricking filters.
Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
Cryptography and Security
Makes AI art generators create forbidden images.
Anyone Can Jailbreak: Prompt-Based Attacks on LLMs and T2Is
Computer Vision and Pattern Recognition
Makes AI ignore rules with tricky words.