BadViM: Backdoor Attack against Vision Mamba
By: Yinghao Wu, Liyan Zhang
Potential Business Impact:
Makes AI vision models misclassify images that contain hidden triggers.
Vision State Space Models (SSMs), particularly architectures like Vision Mamba (ViM), have emerged as promising alternatives to Vision Transformers (ViTs). However, the security implications of this novel architecture, especially its vulnerability to backdoor attacks, remain critically underexplored. Backdoor attacks aim to embed hidden triggers into victim models, causing the model to misclassify inputs containing these triggers while maintaining normal behavior on clean inputs. This paper investigates the susceptibility of ViM to backdoor attacks by introducing BadViM, a novel backdoor attack framework specifically designed for Vision Mamba. The proposed BadViM leverages a Resonant Frequency Trigger (RFT) that exploits the frequency sensitivity patterns of the victim model to create stealthy, distributed triggers. To maximize attack efficacy, we propose a Hidden State Alignment loss that strategically manipulates the internal representations of the model by aligning the hidden states of backdoored images with those of the target class. Extensive experimental results demonstrate that BadViM achieves superior attack success rates while maintaining clean-data accuracy. Meanwhile, BadViM exhibits remarkable resilience against common defensive measures, including PatchDrop, PatchShuffle, and JPEG compression, which typically neutralize conventional backdoor attacks.
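The abstract describes two components: a frequency-domain trigger blended into selected frequency bands, and a loss that pulls backdoored hidden states toward those of the target class. The sketch below illustrates both ideas in minimal form; it is an assumption-laden illustration, not the paper's implementation. The function names, the `freq_mask` band selection, and the additive FFT perturbation are all hypothetical stand-ins (the actual RFT selects bands from the victim model's frequency sensitivity, which is not reproduced here).

```python
import numpy as np

def apply_frequency_trigger(image, freq_mask, amplitude=0.1, seed=0):
    """Blend a fixed pseudo-random trigger into selected FFT bands.

    Hypothetical sketch of a frequency-domain trigger: `freq_mask`
    marks which coefficients to perturb, standing in for the paper's
    model-derived "resonant" band selection.
    """
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(image, axes=(0, 1))
    # A fixed pseudo-random pattern, scaled to the image's spectral
    # energy, serves as the distributed trigger.
    trigger = rng.standard_normal(spectrum.shape) * freq_mask
    poisoned = spectrum + amplitude * np.abs(spectrum).mean() * trigger
    out = np.real(np.fft.ifft2(poisoned, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)

def hidden_state_alignment_loss(h_backdoor, h_target):
    """Toy alignment loss: mean squared distance between the hidden
    states of backdoored inputs and target-class hidden states."""
    return float(np.mean((h_backdoor - h_target) ** 2))

# Usage: poison a 32x32 grayscale image in a mid-frequency band.
img = np.random.default_rng(1).random((32, 32))
mask = np.zeros((32, 32))
mask[4:12, 4:12] = 1.0  # hypothetical band selection
poisoned = apply_frequency_trigger(img, mask)
```

The perturbation is spread across many pixels rather than localized in a patch, which is why, per the abstract, patch-based defenses such as PatchDrop and PatchShuffle are less effective against it.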
Similar Papers
A Separable Self-attention Inspired by the State Space Model for Computer Vision
CV and Pattern Recognition
Makes computers see pictures faster and better.
Backdoor Attacks on Prompt-Driven Video Segmentation Foundation Models
CV and Pattern Recognition
Makes AI models for videos easily tricked.
A Survey on Mamba Architecture for Vision Applications
CV and Pattern Recognition
Makes computers see and understand pictures better.