Do Methods to Jailbreak and Defend LLMs Generalize Across Languages?
By: Berk Atil, Rebecca J. Passonneau, Fred Morstatter
Potential Business Impact:
Makes AI safer in all languages.
Large language models (LLMs) undergo safety alignment after training and tuning, yet recent work shows that safety can be bypassed through jailbreak attacks. While many jailbreaks and defenses exist, their cross-lingual generalization remains underexplored. This paper presents the first systematic multilingual evaluation of jailbreaks and defenses across ten languages -- spanning high-, medium-, and low-resource languages -- using six LLMs on HarmBench and AdvBench. We assess two jailbreak types: logical-expression-based and adversarial-prompt-based. For both types, attack success and defense robustness vary across languages: high-resource languages are safer under standard queries but more vulnerable to adversarial ones. Simple defenses can be effective, but are language- and model-dependent. These findings call for language-aware and cross-lingual safety benchmarks for LLMs.
Similar Papers
Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations
Cryptography and Security
Stops AI from saying bad or unsafe things.
Jailbreaking LLMs & VLMs: Mechanisms, Evaluation, and Unified Defense
Cryptography and Security
Stops AI from being tricked into saying bad things.
Jailbreaking Attacks vs. Content Safety Filters: How Far Are We in the LLM Safety Arms Race?
Cryptography and Security
Stops bad words from getting out of AI.