Anyone Can Jailbreak: Prompt-Based Attacks on LLMs and T2Is

Published: July 29, 2025 | arXiv ID: 2507.21820v1

By: Ahmed B. Mustafa, Zihan Ye, Yang Lu, and more

Potential Business Impact:

Shows how simple, cleverly worded prompts can make AI systems ignore their safety rules.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite significant advancements in alignment and content moderation, large language models (LLMs) and text-to-image (T2I) systems remain vulnerable to prompt-based attacks known as jailbreaks. Unlike traditional adversarial examples requiring expert knowledge, many of today's jailbreaks are low-effort and high-impact, crafted by everyday users with nothing more than cleverly worded prompts. This paper presents a systems-style investigation into how non-experts reliably circumvent safety mechanisms through techniques such as multi-turn narrative escalation, lexical camouflage, implication chaining, fictional impersonation, and subtle semantic edits. We propose a unified taxonomy of prompt-level jailbreak strategies spanning both text-output and T2I models, grounded in empirical case studies across popular APIs. Our analysis reveals that every stage of the moderation pipeline, from input filtering to output validation, can be bypassed with accessible strategies. We conclude by highlighting the urgent need for context-aware defenses that reflect the ease with which these jailbreaks can be reproduced in real-world settings.
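The taxonomy and pipeline stages named in the abstract lend themselves to a simple schema for organizing case studies. The sketch below is purely illustrative: the class and field names (`CaseStudy`, `modality`, etc.) are assumptions, not artifacts from the paper, and the strategy and stage names are taken only from the categories the abstract mentions.

```python
from dataclasses import dataclass
from enum import Enum, auto


class JailbreakStrategy(Enum):
    """Prompt-level jailbreak families named in the paper's taxonomy."""
    MULTI_TURN_NARRATIVE_ESCALATION = auto()
    LEXICAL_CAMOUFLAGE = auto()
    IMPLICATION_CHAINING = auto()
    FICTIONAL_IMPERSONATION = auto()
    SUBTLE_SEMANTIC_EDITS = auto()


class ModerationStage(Enum):
    """Moderation-pipeline stages the abstract says can each be bypassed."""
    INPUT_FILTERING = auto()
    OUTPUT_VALIDATION = auto()


@dataclass
class CaseStudy:
    """One empirical observation: which strategy bypassed which stage, on which modality."""
    strategy: JailbreakStrategy
    bypassed_stage: ModerationStage
    modality: str  # "text" or "t2i"


# Hypothetical record for illustration only; not data from the paper.
example = CaseStudy(
    strategy=JailbreakStrategy.FICTIONAL_IMPERSONATION,
    bypassed_stage=ModerationStage.INPUT_FILTERING,
    modality="text",
)
```

A schema like this makes it easy to tabulate which strategy families defeat which pipeline stages across text and T2I APIs, which is the kind of cross-model comparison the paper's case studies aim at.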

Country of Origin
🇬🇧 United Kingdom

Page Count
18 pages

Category
Computer Science:
Computer Vision and Pattern Recognition