Anyone Can Jailbreak: Prompt-Based Attacks on LLMs and T2Is
By: Ahmed B Mustafa, Zihan Ye, Yang Lu, and more
Potential Business Impact:
Shows how cleverly worded prompts can make AI systems ignore their safety rules.
Despite significant advancements in alignment and content moderation, large language models (LLMs) and text-to-image (T2I) systems remain vulnerable to prompt-based attacks known as jailbreaks. Unlike traditional adversarial examples requiring expert knowledge, many of today's jailbreaks are low-effort and high-impact, crafted by everyday users with nothing more than cleverly worded prompts. This paper presents a systems-style investigation into how non-experts reliably circumvent safety mechanisms through techniques such as multi-turn narrative escalation, lexical camouflage, implication chaining, fictional impersonation, and subtle semantic edits. We propose a unified taxonomy of prompt-level jailbreak strategies spanning both text-output and T2I models, grounded in empirical case studies across popular APIs. Our analysis reveals that every stage of the moderation pipeline, from input filtering to output validation, can be bypassed with accessible strategies. We conclude by highlighting the urgent need for context-aware defenses that reflect the ease with which these jailbreaks can be reproduced in real-world settings.
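The abstract refers to a moderation pipeline whose stages, from input filtering to output validation, can each be bypassed. The sketch below is a minimal, hypothetical illustration of such a two-stage pipeline; it is not the authors' implementation. All names (ModerationPipeline, blocked_input_terms, generate_stub) are assumptions, and the keyword checks stand in for the learned classifiers a real system would use.

```python
# Minimal sketch of a two-stage moderation pipeline: an input filter before
# the model call and an output validator after it. Hypothetical names and
# naive keyword lists; a production system would use learned classifiers
# and context-aware checks rather than substring matching.

from dataclasses import dataclass, field


@dataclass
class ModerationPipeline:
    # Naive keyword lists stand in for learned input/output classifiers.
    blocked_input_terms: set = field(default_factory=lambda: {"exploit", "bypass"})
    blocked_output_terms: set = field(default_factory=lambda: {"harmful content"})

    def input_filter(self, prompt: str) -> bool:
        """Stage 1: reject prompts containing blocked keywords."""
        lowered = prompt.lower()
        return not any(term in lowered for term in self.blocked_input_terms)

    def generate_stub(self, prompt: str) -> str:
        """Stand-in for the underlying LLM or T2I model call."""
        return f"[model output for: {prompt!r}]"

    def output_validator(self, output: str) -> bool:
        """Stage 2: scan the generated output before returning it."""
        lowered = output.lower()
        return not any(term in lowered for term in self.blocked_output_terms)

    def respond(self, prompt: str) -> str:
        if not self.input_filter(prompt):
            return "Request refused by input filter."
        output = self.generate_stub(prompt)
        if not self.output_validator(output):
            return "Response withheld by output validator."
        return output


if __name__ == "__main__":
    pipeline = ModerationPipeline()
    print(pipeline.respond("Explain how content moderation works."))
```

Because each check operates on surface-level text in isolation, the techniques listed in the abstract (lexical camouflage, multi-turn escalation, and similar rephrasings) target exactly these per-stage, context-free decisions.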
Similar Papers
NLP Methods for Detecting Novel LLM Jailbreaks and Keyword Analysis with BERT
Computation and Language
Detects new jailbreak prompts using NLP and BERT keyword analysis.
Machine Learning for Detection and Analysis of Novel LLM Jailbreaks
Computation and Language
Uses machine learning to detect and analyze new jailbreak prompts.
Reason2Attack: Jailbreaking Text-to-Image Models via LLM Reasoning
Cryptography and Security
Uses AI reasoning to trick image generators into making harmful pictures more easily.