VisualDAN: Exposing Vulnerabilities in VLMs with Visual-Driven DAN Commands
By: Aofan Liu, Lulu Tang
Potential Business Impact:
Makes AI show bad things even when told not to.
Vision-Language Models (VLMs) have garnered significant attention for their remarkable ability to interpret and generate multimodal content. However, securing these models against jailbreak attacks remains a substantial challenge. Unlike text-only models, VLMs integrate additional modalities, introducing novel vulnerabilities such as image hijacking, which can manipulate the model into producing inappropriate or harmful responses. Drawing inspiration from text-based jailbreaks like the "Do Anything Now" (DAN) command, this work introduces VisualDAN, a single adversarial image embedded with DAN-style commands. Specifically, we prepend affirmative prefixes (e.g., "Sure, I can provide the guidance you need") to harmful corpora to trick the model into responding positively to malicious queries. The adversarial image is then trained on these DAN-inspired harmful texts and transformed into the text domain to elicit malicious outputs. Extensive experiments on models such as MiniGPT-4, MiniGPT-v2, InstructBLIP, and LLaVA reveal that VisualDAN effectively bypasses the safeguards of aligned VLMs, forcing them to execute a broad range of harmful instructions that severely violate ethical standards. Our results further demonstrate that even a small amount of toxic content can significantly amplify harmful outputs once the model's defenses are compromised. These findings highlight the urgent need for robust defenses against image-based attacks and offer critical insights for future research into the alignment and security of VLMs.
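The core recipe described above (optimize a single image so that harmful prompts paired with it are answered with affirmative, DAN-style text) can be illustrated with a short sketch. This is not the authors' implementation: the vlm.loss interface, the PGD-style signed-gradient update, the perturbation budget, and all hyperparameters below are assumptions made for illustration only.

# Minimal sketch of a VisualDAN-style image optimization, NOT the paper's code.
# Assumptions (hypothetical): `vlm` is a PyTorch vision-language model exposing
# loss(image, prompt, target_text), the language-modeling cross-entropy of
# target_text given the image and prompt; pixel values lie in [0, 1].
import torch

def optimize_visual_dan(vlm, prompts, targets, image_size=(3, 224, 224),
                        steps=500, step_size=1.0 / 255, eps=32.0 / 255,
                        device="cuda"):
    """Return one adversarial image trained over many (prompt, target) pairs."""
    base = torch.rand(1, *image_size, device=device)    # random starting image
    delta = torch.zeros_like(base, requires_grad=True)  # adversarial perturbation

    for _ in range(steps):
        total_loss = 0.0
        for prompt, target in zip(prompts, targets):
            adv_image = (base + delta).clamp(0.0, 1.0)
            # Cross-entropy of the affirmative-prefixed harmful target text,
            # conditioned on the adversarial image and the harmful prompt.
            total_loss = total_loss + vlm.loss(adv_image, prompt, target)

        total_loss.backward()
        with torch.no_grad():
            # Signed-gradient descent step, then project back into the eps-ball.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()

    return (base + delta).clamp(0.0, 1.0).detach()

# Usage sketch: each target is a harmful corpus entry prepended with an
# affirmative prefix such as "Sure, I can provide the guidance you need: ...",
# so the optimized image steers the VLM toward compliant, harmful completions.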
Similar Papers
BackdoorVLM: A Benchmark for Backdoor Attacks on Vision-Language Models
CV and Pattern Recognition
Finds hidden tricks in AI that can fool it.
An Image Is Worth Ten Thousand Words: Verbose-Text Induction Attacks on VLMs
CV and Pattern Recognition
Makes AI talk too much, wasting time and money.
VERA-V: Variational Inference Framework for Jailbreaking Vision-Language Models
Cryptography and Security
Makes AI models with pictures and words unsafe.