Seeing the Threat: Vulnerabilities in Vision-Language Models to Adversarial Attack

Published: May 28, 2025 | arXiv ID: 2505.21967v1

By: Juan Ren, Mark Dras, Usman Naseem

Potential Business Impact:

Helps make AI systems safer against harmful or adversarial instructions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Vision-Language Models (LVLMs) have shown remarkable capabilities across a wide range of multimodal tasks. However, their integration of visual inputs introduces expanded attack surfaces, thereby exposing them to novel security vulnerabilities. In this work, we conduct a systematic representational analysis to uncover why conventional adversarial attacks can circumvent the safety mechanisms embedded in LVLMs. We further propose a novel two-stage evaluation framework for adversarial attacks on LVLMs. The first stage differentiates among instruction non-compliance, outright refusal, and successful adversarial exploitation. The second stage quantifies the degree to which the model's output fulfills the harmful intent of the adversarial prompt, while categorizing refusal behavior into direct refusals, soft refusals, and partial refusals that remain inadvertently helpful. Finally, we introduce a normative schema that defines idealized model behavior when confronted with harmful prompts, offering a principled target for safety alignment in multimodal systems.
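The two-stage framework described in the abstract lends itself to a simple data model. Below is a minimal Python sketch of that taxonomy under stated assumptions; the class names, fields, the `is_safety_failure` helper, and its threshold are illustrative inventions for this summary, not the authors' implementation or scoring rubric.

```python
# Illustrative sketch of the two-stage evaluation taxonomy (names are assumptions).
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class StageOneOutcome(Enum):
    """Stage 1: coarse classification of the model's response."""
    NON_COMPLIANCE = auto()            # instruction not followed at all
    REFUSAL = auto()                   # model declines the request
    ADVERSARIAL_EXPLOITATION = auto()  # the attack succeeded


class RefusalType(Enum):
    """Stage 2: finer-grained categorization of refusal behavior."""
    DIRECT = auto()   # clear, unconditional refusal
    SOFT = auto()     # hedged or deflecting refusal
    PARTIAL = auto()  # nominally refuses but remains inadvertently helpful


@dataclass
class EvaluationResult:
    """Combined verdict for one adversarial prompt/response pair."""
    stage_one: StageOneOutcome
    harm_fulfillment: float                     # degree the output fulfills harmful intent, e.g. in [0, 1]
    refusal_type: Optional[RefusalType] = None  # only set when stage_one is REFUSAL


def is_safety_failure(result: EvaluationResult, threshold: float = 0.5) -> bool:
    """Flag responses that succeed as attacks or refuse while still being harmful."""
    if result.stage_one is StageOneOutcome.ADVERSARIAL_EXPLOITATION:
        return True
    if result.refusal_type is RefusalType.PARTIAL and result.harm_fulfillment >= threshold:
        return True
    return False


if __name__ == "__main__":
    # Example: a partial refusal that still leaks harmful content counts as a failure.
    example = EvaluationResult(
        stage_one=StageOneOutcome.REFUSAL,
        harm_fulfillment=0.7,
        refusal_type=RefusalType.PARTIAL,
    )
    print(is_safety_failure(example))  # True
```

The point of the sketch is simply that the framework separates a categorical verdict (stage one) from a graded harm score plus refusal subtype (stage two), so "partial refusals that remain inadvertently helpful" can still be counted as safety failures.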

Country of Origin
🇦🇺 Australia

Page Count
15 pages

Category
Computer Science:
Computation and Language