Vulnerability-Aware Robust Multimodal Adversarial Training
By: Junrui Zhang, Xinyu Zhao, Jie Peng, and more
Potential Business Impact:
Makes multimodal AI models harder to fool with adversarial tricks.
Multimodal learning has shown significant superiority on various tasks by integrating multiple modalities. However, the interdependencies among modalities increase the susceptibility of multimodal models to adversarial attacks. Existing methods mainly focus on attacking specific modalities or attacking all modalities indiscriminately. In this paper, we find that these approaches ignore the differences in how much each modality contributes to final robustness, resulting in suboptimal robustness. To bridge this gap, we introduce Vulnerability-Aware Robust Multimodal Adversarial Training (VARMAT), a probe-in-training adversarial training method that improves multimodal robustness by identifying the vulnerability of each modality. Specifically, VARMAT first explicitly quantifies the vulnerability of each modality, grounded in a first-order approximation of the attack objective (Probe). We then propose a targeted regularization term that penalizes modalities with high vulnerability, guiding robust learning while maintaining task accuracy (Training). We demonstrate the enhanced robustness of our method across multiple multimodal datasets involving diverse modalities. Finally, we achieve robustness improvements of {12.73%, 22.21%, 11.19%} on three multimodal datasets, revealing a significant blind spot in multimodal adversarial training.
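To make the probe-then-regularize idea concrete, below is a minimal PyTorch-style sketch, not the authors' implementation. It assumes a model that accepts a dict of per-modality tensors, and the function names (`modality_vulnerability`, `vulnerability_weighted_loss`) and the specific weighting scheme are hypothetical illustrations of a first-order vulnerability probe and a vulnerability-weighted regularizer.

```python
import torch
import torch.nn.functional as F

def modality_vulnerability(model, inputs, labels):
    """Probe (illustrative): score each modality's vulnerability with a
    first-order (gradient-based) approximation of the attack objective.

    `inputs` is a dict {modality_name: tensor}; the model is assumed to
    accept such a dict. A larger input-gradient norm means a small
    perturbation can move the loss more, i.e. a more vulnerable modality.
    """
    probes = {k: v.clone().detach().requires_grad_(True) for k, v in inputs.items()}
    loss = F.cross_entropy(model(probes), labels)
    grads = torch.autograd.grad(loss, list(probes.values()))
    scores = {k: g.flatten(1).norm(dim=1).mean().item()
              for k, g in zip(probes.keys(), grads)}
    total = sum(scores.values()) + 1e-12
    return {k: s / total for k, s in scores.items()}  # normalized vulnerability scores


def vulnerability_weighted_loss(model, clean_inputs, adv_inputs, labels, lam=1.0):
    """Training (illustrative): task loss on clean inputs plus a regularizer
    that penalizes highly vulnerable modalities more, by weighting each
    modality's adversarial loss with its probe score."""
    scores = modality_vulnerability(model, clean_inputs, labels)
    task_loss = F.cross_entropy(model(clean_inputs), labels)
    reg = 0.0
    for k, w in scores.items():
        perturbed = dict(clean_inputs)
        perturbed[k] = adv_inputs[k]  # perturb only modality k
        reg = reg + w * F.cross_entropy(model(perturbed), labels)
    return task_loss + lam * reg
```

In this sketch, the vulnerability scores both diagnose which modality is easiest to attack and scale that modality's share of the adversarial penalty, so training effort concentrates on the weakest input stream rather than being spread uniformly across all modalities.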
Similar Papers
Adversarial Attacks in Multimodal Systems: A Practitioner's Survey
Machine Learning (CS)
Protects smart AI from being tricked.
Investigating Vulnerabilities and Defenses Against Audio-Visual Attacks: A Comprehensive Survey Emphasizing Multimodal Models
Cryptography and Security
Protects AI that sees and hears from attacks.
Survey of Adversarial Robustness in Multimodal Large Language Models
CV and Pattern Recognition
Makes AI understand pictures and words safely.