Adversarial Confusion Attack: Disrupting Multimodal Large Language Models

Published: November 25, 2025 | arXiv ID: 2511.20494v2

By: Jakub Hoscilowicz, Artur Janicki

Potential Business Impact:

Makes AI models confidently give wrong answers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We introduce the Adversarial Confusion Attack, a new class of threats against multimodal large language models (MLLMs). Unlike jailbreaks or targeted misclassification, the goal is to induce systematic disruption that makes the model generate incoherent or confidently incorrect outputs. Applications include embedding adversarial images into websites to prevent MLLM-powered agents from operating reliably. The proposed attack maximizes next-token entropy using a small ensemble of open-source MLLMs. In the white-box setting, we show that a single adversarial image can disrupt all models in the ensemble, both in the full-image and adversarial CAPTCHA settings. Despite relying on a basic adversarial technique (PGD), the attack generates perturbations that transfer to both unseen open-source (e.g., Qwen3-VL) and proprietary (e.g., GPT-5.1) models.
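To make the core idea concrete, below is a minimal sketch of PGD that maximizes the ensemble-averaged next-token entropy of surrogate open-source MLLMs. The model interface, step count, and perturbation budget are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: PGD that maximizes average next-token entropy over an
# ensemble of surrogate MLLMs. Model callables and hyperparameters are
# assumptions for illustration, not the paper's exact configuration.
import torch
import torch.nn.functional as F


def next_token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the next-token distribution (last position)."""
    log_probs = F.log_softmax(logits[:, -1, :], dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1).mean()


def confusion_attack(image: torch.Tensor,
                     models: list,             # callables: image -> logits [B, T, V] (assumed interface)
                     steps: int = 200,
                     eps: float = 16 / 255,    # L_inf budget (assumed)
                     alpha: float = 1 / 255):  # PGD step size (assumed)
    """Return an adversarial image that maximizes ensemble next-token entropy."""
    x_orig = image.clone().detach()
    delta = torch.zeros_like(x_orig, requires_grad=True)

    for _ in range(steps):
        x_adv = (x_orig + delta).clamp(0, 1)
        # Average the entropy objective across the surrogate ensemble.
        loss = torch.stack([next_token_entropy(m(x_adv)) for m in models]).mean()
        loss.backward()

        with torch.no_grad():
            # Gradient *ascent* on entropy, projected back onto the L_inf ball.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()

    return (x_orig + delta.detach()).clamp(0, 1)
```

Averaging the objective over several surrogates is what gives the perturbation a chance to transfer to unseen open-source and proprietary models, as the abstract reports.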

Page Count
10 pages

Category
Computer Science:
Computation and Language