Attention! Your Vision Language Model Could Be Maliciously Manipulated
By: Xiaosen Wang, Shaokang Wang, Zhijin Ge, and more
Potential Business Impact:
Makes AI see and follow bad instructions.
Large Vision-Language Models (VLMs) have achieved remarkable success in understanding complex real-world scenarios and supporting data-driven decision-making processes. However, VLMs exhibit significant vulnerability to adversarial examples, whether textual or visual, which can lead to various adversarial outcomes such as jailbreaking, hijacking, and hallucination. In this work, we empirically and theoretically demonstrate that VLMs are particularly susceptible to image-based adversarial examples, where imperceptible perturbations can precisely manipulate each output token. Building on this insight, we propose a novel attack called the Vision-language model Manipulation Attack (VMA), which integrates first-order and second-order momentum optimization with a differentiable transformation mechanism to effectively optimize the adversarial perturbation. Notably, VMA is a double-edged sword: it can be leveraged to implement various attacks, such as jailbreaking, hijacking, privacy breaches, Denial-of-Service, and the generation of sponge examples, while also enabling the injection of watermarks for copyright protection. Extensive empirical evaluations substantiate the efficacy and generalizability of VMA across diverse scenarios and datasets.
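The abstract describes the optimization only at a high level: first- and second-order momentum combined with a differentiable transformation, applied to an imperceptible pixel perturbation that steers the model's output tokens. The sketch below illustrates one plausible reading of that recipe in PyTorch; it is a minimal, hypothetical example, not the authors' implementation. The `vlm_token_loss` callable (a differentiable scalar scoring an attacker-chosen target token sequence under the VLM), the hyperparameters, and the bilinear resize standing in for the "differentiable transformation" are all illustrative assumptions.

```python
# Hypothetical sketch of an Adam-style (first- and second-order momentum)
# pixel-space attack on a vision-language model, loosely following the
# abstract's description of VMA. All names and defaults are assumptions.
import torch
import torch.nn.functional as F


def manipulate_image(vlm_token_loss, image, epsilon=8 / 255, steps=500,
                     lr=0.01, beta1=0.9, beta2=0.999, adam_eps=1e-8,
                     input_size=(336, 336)):
    """Optimize a bounded perturbation so that vlm_token_loss -- a
    differentiable scalar, e.g. cross-entropy of the desired output tokens
    under the VLM -- is minimized.

    image: float tensor of shape (C, H, W) with values in [0, 1].
    """
    delta = torch.zeros_like(image, requires_grad=True)
    m = torch.zeros_like(image)  # first-order momentum (running mean of gradients)
    v = torch.zeros_like(image)  # second-order momentum (running mean of squared gradients)

    for t in range(1, steps + 1):
        # Differentiable transformation: clamp to valid pixels and resize to the
        # VLM's input resolution so gradients flow back through preprocessing.
        adv = torch.clamp(image + delta, 0.0, 1.0)
        adv_in = F.interpolate(adv.unsqueeze(0), size=input_size,
                               mode="bilinear", align_corners=False)

        loss = vlm_token_loss(adv_in)            # scalar loss over the target tokens
        grad, = torch.autograd.grad(loss, delta)

        # Adam-style update combining first- and second-order momentum.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad.pow(2)
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)

        with torch.no_grad():
            delta -= lr * m_hat / (v_hat.sqrt() + adam_eps)
            delta.clamp_(-epsilon, epsilon)      # keep the perturbation imperceptible

    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

Under the same assumptions, the identical loop can be pointed at a benign target sequence, which is how the abstract's watermark-injection use case would fit: the target tokens become a copyright marker rather than a malicious instruction.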
Similar Papers
Seeing the Threat: Vulnerabilities in Vision-Language Models to Adversarial Attack
Computation and Language
Makes AI safer from bad instructions.
Transferable Adversarial Attacks on Black-Box Vision-Language Models
CV and Pattern Recognition
Makes AI misinterpret pictures to trick it.
When Data Manipulation Meets Attack Goals: An In-depth Survey of Attacks for VLMs
CV and Pattern Recognition
Finds ways to trick smart computer eyes.