Do Methods to Jailbreak and Defend LLMs Generalize Across Languages?
By: Berk Atil, Rebecca J. Passonneau, Fred Morstatter
Potential Business Impact:
Helps make AI safer across many languages.
Large language models (LLMs) undergo safety alignment after pre-training and fine-tuning, yet recent work shows that this safety can be bypassed through jailbreak attacks. While many jailbreaks and defenses exist, their cross-lingual generalization remains underexplored. This paper presents the first systematic multilingual evaluation of jailbreaks and defenses across ten languages -- spanning high-, medium-, and low-resource settings -- using six LLMs on HarmBench and AdvBench. We assess two jailbreak types: logical-expression-based and adversarial-prompt-based. For both types, attack success and defense robustness vary across languages: high-resource languages are safer under standard queries but more vulnerable to adversarial ones. Simple defenses can be effective, but their effectiveness is language- and model-dependent. These findings call for language-aware and cross-lingual safety benchmarks for LLMs.
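The evaluation protocol the abstract describes reduces to one number per (model, language, attack) triple: the attack success rate (ASR), i.e., the fraction of harmful prompts for which the jailbroken query elicits a harmful response. The sketch below illustrates that bookkeeping in Python; it is not the authors' code, and `query_model`, `is_harmful`, and `apply_jailbreak` are assumed callables standing in for a model API call, a harm judge (e.g., a HarmBench-style classifier), and a jailbreak template (logical-expression or adversarial-prompt), respectively.

# Minimal, hypothetical sketch of per-language attack-success-rate (ASR) bookkeeping.
# `query_model`, `is_harmful`, and `apply_jailbreak` are assumed callables, not part
# of any specific library: model API call, harm judge, and jailbreak template.
def attack_success_rate(models, prompts_by_language, apply_jailbreak,
                        query_model, is_harmful):
    """Return {model_name: {language: ASR}} for a single attack type.

    prompts_by_language maps a language code (e.g., "en", "sw") to a list of
    harmful base prompts translated into that language.
    """
    asr = {}
    for model in models:
        asr[model] = {}
        for lang, prompts in prompts_by_language.items():
            # Count prompts whose jailbroken form elicits a harmful response.
            hits = sum(
                1
                for prompt in prompts
                if is_harmful(query_model(model, apply_jailbreak(prompt, lang)))
            )
            asr[model][lang] = hits / len(prompts)
    return asr

Comparing the resulting per-language ASR tables for high-, medium-, and low-resource languages, with and without a defense applied, is the kind of cross-lingual comparison the abstract summarizes.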
Similar Papers
Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations
Cryptography and Security
Stops AI from saying bad or unsafe things.
Evolving Security in LLMs: A Study of Jailbreak Attacks and Defenses
Cryptography and Security
Makes AI safer from bad instructions.
Uncovering the Persuasive Fingerprint of LLMs in Jailbreaking Attacks
Computation and Language
Reveals how persuasion is used to make AI follow bad instructions.