A Domain-Based Taxonomy of Jailbreak Vulnerabilities in Large Language Models
By: Carlos Peláez-González, Andrés Herrera-Poyatos, Cristina Zuheros, and more
Potential Business Impact:
Makes AI safer by understanding how it breaks.
The study of large language models (LLMs) is a key area in open-world machine learning. Although LLMs demonstrate remarkable natural language processing capabilities, they also face several challenges, including consistency issues, hallucinations, and jailbreak vulnerabilities. Jailbreaking refers to crafting prompts that bypass alignment safeguards, producing unsafe outputs that compromise the integrity of LLMs. This work focuses on jailbreak vulnerabilities and introduces a novel taxonomy of jailbreak attacks grounded in the training domains of LLMs, characterizing alignment failures through gaps in generalization, objectives, and robustness. Our primary contribution is a perspective on jailbreaking framed through the different linguistic domains that emerge during LLM training and alignment. This viewpoint highlights the limitations of existing approaches and allows us to classify jailbreak attacks according to the underlying model deficiencies they exploit. Unlike conventional classifications that categorize attacks by prompt construction method (e.g., prompt templating), our approach provides a deeper understanding of LLM behavior. We introduce a taxonomy with four categories -- mismatched generalization, competing objectives, adversarial robustness, and mixed attacks -- offering insight into the fundamental nature of jailbreak vulnerabilities. Finally, we present key lessons derived from this taxonomic study.
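To make the classification idea concrete, here is a minimal sketch of how the four taxonomy categories could be represented and used to group attacks by the model deficiency they exploit. The specific attack names, descriptions, and category assignments in the example are illustrative assumptions for demonstration only, not labels taken from the paper.

```python
from enum import Enum
from dataclasses import dataclass

class Deficiency(Enum):
    """The four taxonomy categories: the model deficiency an attack exploits."""
    MISMATCHED_GENERALIZATION = "mismatched generalization"
    COMPETING_OBJECTIVES = "competing objectives"
    ADVERSARIAL_ROBUSTNESS = "adversarial robustness"
    MIXED = "mixed attacks"

@dataclass
class JailbreakAttack:
    name: str
    description: str
    deficiency: Deficiency

# Illustrative entries only -- the attack names and category assignments below
# are assumptions for demonstration, not labels taken from the paper.
EXAMPLES = [
    JailbreakAttack(
        name="encoded request",
        description="Harmful request hidden in an encoding rarely seen during alignment.",
        deficiency=Deficiency.MISMATCHED_GENERALIZATION,
    ),
    JailbreakAttack(
        name="role-play persona prompt",
        description="Instruction-following objective is pitted against the safety objective.",
        deficiency=Deficiency.COMPETING_OBJECTIVES,
    ),
    JailbreakAttack(
        name="optimized adversarial suffix",
        description="Token-level perturbation searched to flip refusal behavior.",
        deficiency=Deficiency.ADVERSARIAL_ROBUSTNESS,
    ),
]

def group_by_deficiency(attacks):
    """Group attacks by the underlying deficiency they exploit."""
    groups = {}
    for attack in attacks:
        groups.setdefault(attack.deficiency, []).append(attack.name)
    return groups

if __name__ == "__main__":
    for deficiency, names in group_by_deficiency(EXAMPLES).items():
        print(f"{deficiency.value}: {', '.join(names)}")
```

The point of the sketch is the organizing principle: attacks are keyed to the deficiency they exploit rather than to how their prompts are constructed.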
Similar Papers
Do Methods to Jailbreak and Defend LLMs Generalize Across Languages?
Computation and Language
Makes AI safer in all languages.
From LLMs to MLLMs to Agents: A Survey of Emerging Paradigms in Jailbreak Attacks and Defenses within LLM Ecosystem
Cryptography and Security
Protects smart AI from being tricked.
Evolving Security in LLMs: A Study of Jailbreak Attacks and Defenses
Cryptography and Security
Makes AI safer from bad instructions.