Multi-turn Jailbreaking Attack in Multi-Modal Large Language Models
By: Badhan Chandra Das, Md Tasnim Jawad, Joaquin Molto, and more
Potential Business Impact:
Helps prevent multi-modal AI models from being tricked into unsafe behavior by carefully crafted multi-turn prompts.
In recent years, the security vulnerabilities of Multi-modal Large Language Models (MLLMs) have become a serious concern in Generative Artificial Intelligence (GenAI) research. These highly capable models, able to perform multi-modal tasks with high accuracy, are also severely susceptible to carefully crafted security attacks, such as jailbreaking attacks, which can manipulate model behavior and bypass safety constraints. This paper introduces MJAD-MLLMs, a holistic framework that systematically analyzes multi-turn jailbreaking attacks and multi-LLM-based defense techniques for MLLMs. We make three original contributions. First, we introduce a novel multi-turn jailbreaking attack that exploits the vulnerabilities of MLLMs under multi-turn prompting. Second, we propose a novel fragment-optimized, multi-LLM defense mechanism, called FragGuard, to effectively mitigate jailbreaking attacks on MLLMs. Third, we evaluate the efficacy of the proposed attack and defense through extensive experiments on several state-of-the-art (SOTA) open-source and closed-source MLLMs and benchmark datasets, and compare their performance with existing techniques.
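The abstract does not specify how the multi-turn attack or the FragGuard defense is implemented, so the following is only a minimal conceptual sketch of the two ideas it names: an attacker feeding a sequence of prompts to a target model while carrying conversation history across turns, and a defense that splits the conversation into fragments and scores them with several judge models. The function names, the `query_mllm` and judge callables, and the fragment/threshold parameters are hypothetical placeholders, not the paper's actual method.

```python
# Illustrative sketch only: a multi-turn jailbreak probe and a fragment-based,
# multi-judge defense. All names and parameters here are assumptions; the
# paper's MJAD-MLLMs attack and FragGuard defense are not detailed in the abstract.

from typing import Callable, List, Dict

def multi_turn_attack(query_mllm: Callable[[List[Dict[str, str]]], str],
                      turn_prompts: List[str]) -> List[str]:
    """Send attacker-crafted prompts to the target model one turn at a time,
    keeping the full conversation history across turns."""
    history: List[Dict[str, str]] = []
    responses: List[str] = []
    for prompt in turn_prompts:
        history.append({"role": "user", "content": prompt})
        reply = query_mllm(history)  # one call to the (hypothetical) target MLLM
        history.append({"role": "assistant", "content": reply})
        responses.append(reply)
    return responses

def fragment_guard(conversation: List[Dict[str, str]],
                   judges: List[Callable[[str], float]],
                   fragment_size: int = 2,
                   threshold: float = 0.5) -> bool:
    """Split the conversation into consecutive fragments and flag it as a
    jailbreak attempt if any fragment's average judge score exceeds the threshold."""
    texts = [m["content"] for m in conversation]
    for start in range(0, len(texts), fragment_size):
        fragment = " ".join(texts[start:start + fragment_size])
        avg_score = sum(judge(fragment) for judge in judges) / len(judges)
        if avg_score > threshold:
            return True  # caller would block or re-route the request
    return False
```

The key design point the sketch tries to convey is that a multi-turn attack distributes harmful intent across several innocuous-looking turns, so a defense that only inspects single prompts can miss it; scoring fragments of the whole conversation with multiple judges is one plausible way to recover that context.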
Similar Papers
Enhanced MLLM Black-Box Jailbreaking Attacks and Defenses
Cryptography and Security
Studies black-box jailbreaking attacks on multi-modal AI models and defenses against them.
Align is not Enough: Multimodal Universal Jailbreak Attack against Multimodal Large Language Models
Cryptography and Security
Presents a universal multimodal jailbreak attack that bypasses alignment in multimodal AI models.
Many-Turn Jailbreaking
Computation and Language
Examines how long, many-turn conversations can be used to make AI assistants produce harmful content.