Score: 2

When Style Breaks Safety: Defending Language Models Against Superficial Style Alignment

Published: June 9, 2025 | arXiv ID: 2506.07452v1

By: Yuxin Xiao, Sana Tonekaboni, Walter Gerych, and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Hardens LLMs against style-based jailbreak prompts, reducing the risk that fine-tuned models comply with malicious instructions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) can be prompted with specific styles (e.g., formatting responses as lists), including in jailbreak queries. Although these style patterns are semantically unrelated to the malicious intents behind jailbreak queries, their safety impact remains unclear. In this work, we seek to understand whether style patterns compromise LLM safety, how superficial style alignment increases model vulnerability, and how best to mitigate these risks during alignment. We evaluate 32 LLMs across seven jailbreak benchmarks, and find that malicious queries with style patterns inflate the attack success rate (ASR) for nearly all models. Notably, ASR inflation correlates with both the length of style patterns and the relative attention an LLM exhibits on them. We then investigate superficial style alignment, and find that fine-tuning with specific styles makes LLMs more vulnerable to jailbreaks of those same styles. Finally, we propose SafeStyle, a defense strategy that incorporates a small amount of safety training data augmented to match the distribution of style patterns in the fine-tuning data. Across three LLMs and five fine-tuning style settings, SafeStyle consistently outperforms baselines in maintaining LLM safety.
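The abstract's description of SafeStyle suggests a simple recipe: take a small pool of safety examples, restyle them so their style distribution matches the style distribution of the fine-tuning data, and mix them in. Below is a minimal sketch of that idea in Python; the style templates, the `safety_fraction` hyperparameter, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import Counter

# Hypothetical style patterns; the paper's exact styles are not listed in this summary.
STYLES = ["list", "json", "markdown_table", "plain"]

def apply_style(example: dict, style: str) -> dict:
    """Rewrite a (prompt, response) pair to request a given response style.
    This templating is an illustrative stand-in for the paper's augmentation."""
    styled_prompt = f"{example['prompt']}\n\nFormat your answer as {style}."
    return {"prompt": styled_prompt, "response": example["response"], "style": style}

def safestyle_mix(finetune_data, safety_data, safety_fraction=0.05, seed=0):
    """Augment a small pool of safety examples so their style distribution
    matches that of the fine-tuning data, then mix them in.
    Assumes fine-tuning examples carry a 'style' tag and safety_data is
    non-empty; safety_fraction is an assumed hyperparameter, not the paper's value."""
    rng = random.Random(seed)
    style_counts = Counter(ex.get("style", "plain") for ex in finetune_data)
    total = sum(style_counts.values())

    # Size the safety pool as a small fraction of the fine-tuning set.
    n_safety = max(1, int(safety_fraction * len(finetune_data)))
    augmented = []
    for style, count in style_counts.items():
        k = round(n_safety * count / total)
        for _ in range(k):
            augmented.append(apply_style(rng.choice(safety_data), style))

    mixed = finetune_data + augmented
    rng.shuffle(mixed)
    return mixed
```

The mixed set would then be fed to a standard supervised fine-tuning loop; per the abstract, this style-matched augmentation consistently outperformed baselines in maintaining safety across three LLMs and five fine-tuning style settings.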

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
16 pages

Category
Computer Science: Machine Learning (CS)