Score: 2

Vulnerability-Aware Robust Multimodal Adversarial Training

Published: November 22, 2025 | arXiv ID: 2511.18138v1

By: Junrui Zhang, Xinyu Zhao, Jie Peng, and more

Potential Business Impact:

Makes multimodal AI models harder to fool with adversarial attacks by shoring up their weakest input modalities.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Multimodal learning has shown significant superiority on various tasks by integrating multiple modalities. However, the interdependencies among modalities increase the susceptibility of multimodal models to adversarial attacks. Existing methods mainly focus on attacking specific modalities or attack all modalities indiscriminately. In this paper, we find that these approaches ignore the differences in how much each modality contributes to final robustness, resulting in suboptimal robustness. To bridge this gap, we introduce Vulnerability-Aware Robust Multimodal Adversarial Training (VARMAT), a probe-in-training adversarial training method that improves multimodal robustness by identifying the vulnerability of each modality. Specifically, VARMAT first explicitly quantifies the vulnerability of each modality, grounded in a first-order approximation of the attack objective (Probe). Then, we propose a targeted regularization term that penalizes modalities with high vulnerability, guiding robust learning while maintaining task accuracy (Training). We demonstrate the enhanced robustness of our method across multiple multimodal datasets involving diverse modalities. Finally, we achieve robustness improvements of 12.73%, 22.21%, and 11.19% on three multimodal datasets, revealing a significant blind spot in multimodal adversarial training.
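
The abstract only sketches the probe-in-training idea, so here is a minimal, hypothetical PyTorch illustration of the general mechanism, not the paper's actual implementation. Linearizing the attack objective to first order, an L-infinity-bounded perturbation of modality m (budget eps) can raise the loss by roughly eps * ||grad_{x_m} L||_1, so that gradient norm can serve as a per-modality vulnerability score; a regularizer then penalizes the most vulnerable modalities. The function name, the dict-of-tensors model interface, and the softmax weighting are all assumptions.

```python
# Minimal sketch (not the paper's implementation) of a first-order
# vulnerability probe plus a targeted regularizer, in PyTorch.
import torch
import torch.nn.functional as F

def varmat_style_loss(model, inputs, labels, eps=1e-3, lam=0.1):
    """Task loss plus a penalty on the most vulnerable modalities.

    `model` is assumed to map a dict of per-modality tensors to logits;
    `inputs` maps modality names (e.g. "image", "audio") to tensors.
    """
    # Leaf tensors so we can differentiate w.r.t. each modality input.
    inputs = {m: x.clone().detach().requires_grad_(True)
              for m, x in inputs.items()}
    task_loss = F.cross_entropy(model(inputs), labels)

    # Probe: under a first-order expansion, an eps-bounded (L-inf)
    # perturbation of a modality raises the loss by about
    # eps * ||grad||_1, so the gradient's L1 norm scores vulnerability.
    grads = torch.autograd.grad(task_loss, list(inputs.values()),
                                create_graph=True)
    scores = torch.stack([eps * g.abs().sum() for g in grads])

    # Training: penalize modalities in proportion to their vulnerability
    # (softmax weighting is our assumption, not the paper's regularizer;
    # weights are detached so only the scores themselves are penalized).
    reg = (torch.softmax(scores.detach(), dim=0) * scores).sum()
    return task_loss + lam * reg
```

In a training loop this would stand in for the plain cross-entropy loss; because the regularizer differentiates through the probe gradients (create_graph=True), each step costs roughly one extra backward pass.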

Country of Origin
πŸ‡¨πŸ‡³ πŸ‡ΊπŸ‡Έ China, United States

Repos / Data Links

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)