Score: 2

Bypassing Prompt Guards in Production with Controlled-Release Prompting

Published: October 2, 2025 | arXiv ID: 2510.01529v1

By: Jaiden Fairoze, Sanjam Garg, Keewoo Lee, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Shows that lightweight prompt guards on production chatbots can be bypassed, undermining input-filtering safety measures.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As large language models (LLMs) advance, ensuring AI safety and alignment is paramount. One popular approach is prompt guards, lightweight mechanisms designed to filter malicious queries while being easy to implement and update. In this work, we introduce a new attack that circumvents such prompt guards, highlighting their limitations. Our method consistently jailbreaks production models while maintaining response quality, even under the highly protected chat interfaces of Google Gemini (2.5 Flash/Pro), DeepSeek Chat (DeepThink), Grok (3), and Mistral Le Chat (Magistral). The attack exploits a resource asymmetry between the prompt guard and the main LLM, encoding a jailbreak prompt that lightweight guards cannot decode but the main model can. This reveals an attack surface inherent to lightweight prompt guards in modern LLM architectures and underscores the need to shift defenses from blocking malicious inputs to preventing malicious outputs. We additionally identify other critical alignment issues, such as copyrighted data extraction, training data extraction, and malicious response leakage during thinking.

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
40 pages

Category
Computer Science: Machine Learning (CS)