Score: 1

Multi-turn Jailbreaking Attack in Multi-Modal Large Language Models

Published: January 8, 2026 | arXiv ID: 2601.05339v1

By: Badhan Chandra Das , Md Tasnim Jawad , Joaquin Molto and more

Potential Business Impact:

Stops smart AI from being tricked by bad questions.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

In recent years, the security vulnerabilities of Multi-modal Large Language Models (MLLMs) have become a serious concern in the Generative Artificial Intelligence (GenAI) research. These highly intelligent models, capable of performing multi-modal tasks with high accuracy, are also severely susceptible to carefully launched security attacks, such as jailbreaking attacks, which can manipulate model behavior and bypass safety constraints. This paper introduces MJAD-MLLMs, a holistic framework that systematically analyzes the proposed Multi-turn Jailbreaking Attacks and multi-LLM-based defense techniques for MLLMs. In this paper, we make three original contributions. First, we introduce a novel multi-turn jailbreaking attack to exploit the vulnerabilities of the MLLMs under multi-turn prompting. Second, we propose a novel fragment-optimized and multi-LLM defense mechanism, called FragGuard, to effectively mitigate jailbreaking attacks in the MLLMs. Third, we evaluate the efficacy of the proposed attacks and defenses through extensive experiments on several state-of-the-art (SOTA) open-source and closed-source MLLMs and benchmark datasets, and compare their performance with the existing techniques.

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
18 pages

Category
Computer Science:
Cryptography and Security