DefenSee: Dissecting Threat from Sight and Text - A Multi-View Defensive Pipeline for Multi-modal Jailbreaks
By: Zihao Wang, Kar Wai Fok, Vrizlynn L. L. Thing
Potential Business Impact:
Stops AI from being tricked by bad pictures.
Multi-modal large language models (MLLMs), capable of processing text, images, and audio, have been widely adopted in various AI applications. However, recent MLLMs integrating images and text remain highly vulnerable to coordinated jailbreaks. Existing defenses primarily focus on text, lacking robust multi-modal protection. As a result, studies indicate that MLLMs are more susceptible to malicious or unsafe instructions than their text-only counterparts. In this paper, we propose DefenSee, a robust and lightweight multi-modal black-box defense technique that leverages transcription of image variants and cross-modal consistency checks, mimicking human judgment. Experiments on popular multi-modal jailbreak and benign datasets show that DefenSee consistently enhances MLLM robustness while better preserving performance on benign tasks compared to state-of-the-art (SOTA) defenses. It reduces the attack success rate (ASR) of jailbreak attacks to below 1.70% on MiniGPT4 using the MM-SafetyBench benchmark, significantly outperforming prior methods under the same conditions.
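To make the pipeline concrete, below is a minimal sketch of a DefenSee-style multi-view check, assuming a caller-supplied MLLM client. The `transcribe` and `moderate_text` hooks are hypothetical stand-ins for real model calls, and the variant set and consistency rule are illustrative, not the paper's exact design.

```python
# Hedged sketch: generate image variants, transcribe each with an MLLM,
# and cross-check the transcriptions against the user's text request.
from typing import Callable, List
from PIL import Image, ImageFilter, ImageOps

def make_variants(image: Image.Image) -> List[Image.Image]:
    """Produce perturbed views of the input image (the 'multi-view' step)."""
    return [
        image,                                                # original view
        image.filter(ImageFilter.GaussianBlur(2)),            # blurred view
        ImageOps.grayscale(image).convert("RGB"),             # grayscale view
        image.resize((image.width // 2, image.height // 2)),  # downscaled view
    ]

def defensee_check(
    image: Image.Image,
    user_text: str,
    transcribe: Callable[[Image.Image], str],  # MLLM hook: image -> description
    moderate_text: Callable[[str], bool],      # safety hook: True if unsafe
) -> bool:
    """Return True if the (image, text) pair should be refused.

    Each image variant is transcribed to text, and the transcriptions are
    checked both for unsafe content on their own and for what they reveal
    when combined with the user's text query, mimicking a human
    cross-checking sight against text.
    """
    transcriptions = [transcribe(v) for v in make_variants(image)]

    # Flag if any view's transcription surfaces unsafe content directly.
    if any(moderate_text(t) for t in transcriptions):
        return True

    # Cross-modal consistency: flag if the combined request (user text plus
    # what the model 'sees') is unsafe even though each part looks benign.
    combined = user_text + " " + " ".join(transcriptions)
    return moderate_text(combined)
```

Because the check operates purely on model inputs and outputs, it fits the black-box setting described in the abstract: no access to the protected MLLM's weights is assumed.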
Similar Papers
Enhanced MLLM Black-Box Jailbreaking Attacks and Defenses
Cryptography and Security
Finds ways to trick smart AI with pictures.
Beyond Text: Multimodal Jailbreaking of Vision-Language and Audio Models through Perceptually Simple Transformations
Cryptography and Security
Tricks AI into showing bad stuff using pictures.
Align is not Enough: Multimodal Universal Jailbreak Attack against Multimodal Large Language Models
Cryptography and Security
Breaks the safety of AI models that use pictures.