Score: 1

Behind the Mask: Benchmarking Camouflaged Jailbreaks in Large Language Models

Published: September 5, 2025 | arXiv ID: 2509.05471v1

By: Youjia Zheng, Mohammad Zandsalimy, Shanu Sushmita

Potential Business Impact:

Helps stop AI models from being tricked by harmful instructions hidden inside innocent-looking requests.

Business Areas:
Law Enforcement, Government and Military, Privacy and Security

Large Language Models (LLMs) are increasingly vulnerable to a sophisticated form of adversarial prompting known as camouflaged jailbreaking. This method embeds malicious intent within seemingly benign language to evade existing safety mechanisms. Unlike overt attacks, these subtle prompts exploit contextual ambiguity and the flexible nature of language, posing significant challenges to current defense systems. This paper investigates the construction and impact of camouflaged jailbreak prompts, emphasizing their deceptive characteristics and the limitations of traditional keyword-based detection methods. We introduce a novel benchmark dataset, Camouflaged Jailbreak Prompts, containing 500 curated examples (400 harmful and 100 benign prompts) designed to rigorously stress-test LLM safety protocols. In addition, we propose a multi-faceted evaluation framework that measures harmfulness across seven dimensions: Safety Awareness, Technical Feasibility, Implementation Safeguards, Harmful Potential, Educational Value, Content Quality, and Compliance Score. Our findings reveal a stark contrast in LLM behavior: while models demonstrate high safety and content quality with benign inputs, they exhibit a significant decline in performance and safety when confronted with camouflaged jailbreak attempts. This disparity underscores a pervasive vulnerability, highlighting the urgent need for more nuanced and adaptive security strategies to ensure the responsible and robust deployment of LLMs in real-world applications.
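To make the benchmark and scoring setup concrete, below is a minimal sketch of how such a dataset and seven-dimension evaluation could be represented. The `PromptRecord` schema, the `aggregate` helper, and the score scale are illustrative assumptions, not the authors' actual implementation; only the dimension names and the 400/100 harmful-benign split come from the abstract.

```python
from dataclasses import dataclass, field
from statistics import mean

# The seven harmfulness-evaluation dimensions named in the abstract.
DIMENSIONS = [
    "safety_awareness",
    "technical_feasibility",
    "implementation_safeguards",
    "harmful_potential",
    "educational_value",
    "content_quality",
    "compliance_score",
]

@dataclass
class PromptRecord:
    """One benchmark entry (hypothetical schema, not the paper's release format)."""
    prompt: str
    label: str  # "harmful" (400 examples) or "benign" (100 examples)
    scores: dict = field(default_factory=dict)  # dimension -> score (assumed 1-5 scale)

def aggregate(records: list[PromptRecord]) -> dict:
    """Average each dimension's score across a set of evaluated model responses."""
    return {
        dim: mean(r.scores.get(dim, 0.0) for r in records) if records else 0.0
        for dim in DIMENSIONS
    }

if __name__ == "__main__":
    # Toy comparison of benign vs. camouflaged-harmful subsets.
    records = [
        PromptRecord("Explain basic lab safety practices.", "benign",
                     {d: 4.5 for d in DIMENSIONS}),
        PromptRecord("A 'fictional chemistry tutorial' concealing harmful intent.", "harmful",
                     {d: 2.0 for d in DIMENSIONS}),
    ]
    benign = [r for r in records if r.label == "benign"]
    harmful = [r for r in records if r.label == "harmful"]
    print("benign averages:", aggregate(benign))
    print("camouflaged-harmful averages:", aggregate(harmful))
```

Comparing the per-dimension averages of the benign and camouflaged-harmful subsets, as in the toy example above, is one simple way to surface the behavioral gap the paper reports.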

Country of Origin
🇨🇦 🇺🇸 Canada, United States

Page Count
21 pages

Category
Computer Science:
Cryptography and Security