MTAttack: Multi-Target Backdoor Attacks against Large Vision-Language Models
By: Zihan Wang, Guansong Pang, Wenjun Miao, and more
Potential Business Impact:
Makes AI models learn multiple bad tricks at once.
Recent advances in Large Vision-Language Models (LVLMs) have demonstrated impressive performance across various vision-language tasks by leveraging large-scale image-text pretraining and instruction tuning. However, the security vulnerabilities of LVLMs have become increasingly concerning, particularly their susceptibility to backdoor attacks. Existing backdoor attacks focus on single-target attacks, i.e., targeting a single malicious output associated with a specific trigger. In this work, we uncover multi-target backdoor attacks, where multiple independent triggers corresponding to different attack targets are added in a single pass of training, posing a greater threat to LVLMs in real-world applications. Executing such attacks in LVLMs is challenging since there can be many incorrect trigger-target mappings due to severe feature interference among different triggers. To address this challenge, we propose MTAttack, the first multi-target backdoor attack framework for enforcing accurate multiple trigger-target mappings in LVLMs. The core of MTAttack is a novel optimization method with two constraints, namely the Proxy Space Partitioning constraint and the Trigger Prototype Anchoring constraint. It jointly optimizes multiple triggers in the latent space, with each trigger independently mapping clean images to a unique proxy class while at the same time guaranteeing their separability. Experiments on popular benchmarks demonstrate a high success rate of MTAttack for multi-target attacks, substantially outperforming existing attack methods. Furthermore, our attack exhibits strong generalizability across datasets and robustness against backdoor defense strategies. These findings highlight the vulnerability of LVLMs to multi-target backdoor attacks and underscore the urgent need for mitigating such threats. Code is available at https://github.com/mala-lab/MTAttack.
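To make the joint multi-trigger optimization more concrete, here is a minimal PyTorch-style sketch of the general idea: several triggers are optimized together, with one loss term pulling each trigger's poisoned features toward its own proxy class and another pushing the trigger prototypes apart. The toy encoder, proxy centers, loss forms, margin, and all names below are illustrative assumptions, not the paper's actual formulation; consult the released code for the real method.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: K triggers optimized jointly against a frozen feature encoder.
K, C, H, W, D = 4, 3, 32, 32, 256                         # triggers, toy image shape, feature dim
triggers = torch.zeros(K, C, H, W, requires_grad=True)    # learnable trigger patterns
proxy_centers = F.normalize(torch.randn(K, D), dim=-1)    # one assumed proxy class per trigger

encoder = torch.nn.Sequential(                             # stand-in for the frozen LVLM vision encoder
    torch.nn.Flatten(), torch.nn.Linear(C * H * W, D)
).requires_grad_(False)

optimizer = torch.optim.Adam([triggers], lr=1e-2)

def step(clean_images, margin=0.5):
    """One joint optimization step over all K triggers (illustrative losses only)."""
    protos = []
    for k in range(K):
        poisoned = (clean_images + triggers[k]).clamp(0, 1)            # apply trigger k
        protos.append(F.normalize(encoder(poisoned).mean(0), dim=-1))  # mean poisoned feature
    protos = torch.stack(protos)                                       # (K, D) trigger prototypes

    # Proxy-space-partitioning-style term (assumed form): pull each trigger's
    # prototype toward its own proxy center so every trigger owns a distinct region.
    partition_loss = (1 - (protos * proxy_centers).sum(-1)).mean()

    # Trigger-prototype-anchoring-style term (assumed form): push prototypes of
    # different triggers apart to limit feature interference among them.
    sim = protos @ protos.t()
    off_diag = sim - torch.eye(K)                                      # zero out self-similarity
    anchor_loss = F.relu(off_diag - margin).sum() / (K * (K - 1))

    loss = partition_loss + anchor_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: one optimization step on a random batch of clean images.
print(step(torch.rand(8, C, H, W)))
```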
Similar Papers
BackdoorVLM: A Benchmark for Backdoor Attacks on Vision-Language Models
CV and Pattern Recognition
Finds hidden tricks in AI that can fool it.
Robust Anti-Backdoor Instruction Tuning in LVLMs
Cryptography and Security
Protects AI from hidden tricks in its training.
TabVLA: Targeted Backdoor Attacks on Vision-Language-Action Models
Cryptography and Security
Makes robots do bad things when tricked.