Say It Differently: Linguistic Styles as Jailbreak Vectors
By: Srikant Panda, Avinash Rai
Potential Business Impact:
Makes AI safer by spotting tricky wording.
Large Language Models (LLMs) are commonly evaluated for robustness against paraphrased or semantically equivalent jailbreak prompts, yet little attention has been paid to linguistic variation as an attack surface. In this work, we systematically study how linguistic styles such as fear or curiosity can reframe harmful intent and elicit unsafe responses from aligned models. We construct a style-augmented jailbreak benchmark by transforming prompts from 3 standard datasets into 11 distinct linguistic styles using handcrafted templates and LLM-based rewrites, while preserving semantic intent. Evaluating 16 open- and closed-source instruction-tuned models, we find that stylistic reframing increases jailbreak success rates by up to +57 percentage points. Styles such as fearful, curious, and compassionate are the most effective, and contextualized rewrites outperform templated variants. To mitigate this, we introduce a style-neutralization preprocessing step that uses a secondary LLM to strip manipulative stylistic cues from user inputs, significantly reducing jailbreak success rates. Our findings reveal a systemic and scaling-resistant vulnerability overlooked in current safety pipelines.
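To make the mitigation concrete, here is a minimal sketch of the style-neutralization preprocessing idea: a secondary LLM rewrites the incoming user message in a neutral tone before it reaches the target model. The prompt wording, function names, and the assumption that both models are exposed as simple text-in/text-out callables are illustrative; this is not the authors' implementation.

```python
from typing import Callable

# Hypothetical neutralization instruction; the paper does not publish its exact prompt.
NEUTRALIZE_INSTRUCTION = (
    "Rewrite the user message below in a plain, neutral tone. "
    "Remove emotional or persuasive framing (e.g., fear, urgency, flattery, curiosity) "
    "but preserve the literal request and all factual content.\n\n"
    "User message:\n{message}\n\nNeutral rewrite:"
)


def neutralize_style(message: str, secondary_llm: Callable[[str], str]) -> str:
    """Strip manipulative stylistic cues from a user input using a secondary LLM."""
    return secondary_llm(NEUTRALIZE_INSTRUCTION.format(message=message)).strip()


def guarded_generate(message: str,
                     secondary_llm: Callable[[str], str],
                     target_llm: Callable[[str], str]) -> str:
    """Preprocess the prompt with the neutralizer, then query the target model."""
    neutral_message = neutralize_style(message, secondary_llm)
    return target_llm(neutral_message)
```

In this setup the safety-aligned target model only ever sees the neutralized rewrite, so a fearful or curiosity-laden reframing of a harmful request is reduced back toward its plain form before the target model's safety behavior is exercised.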
Similar Papers
Uncovering the Persuasive Fingerprint of LLMs in Jailbreaking Attacks
Computation and Language
Makes AI more likely to follow bad instructions.
When Style Breaks Safety: Defending Language Models Against Superficial Style Alignment
Machine Learning (CS)
Makes AI safer from bad instructions.
Behind the Mask: Benchmarking Camouflaged Jailbreaks in Large Language Models
Cryptography and Security
Stops AI from being tricked by hidden bad instructions.