Adversarial Confusion Attack: Disrupting Multimodal Large Language Models
By: Jakub Hoscilowicz, Artur Janicki
Potential Business Impact:
Makes AI models confidently give wrong answers.
We introduce the Adversarial Confusion Attack, a new class of threats against multimodal large language models (MLLMs). Unlike jailbreaks or targeted misclassification, its goal is to induce systematic disruption that makes the model generate incoherent or confidently incorrect outputs. Applications include embedding adversarial images into websites to prevent MLLM-powered agents from operating reliably. The proposed attack maximizes next-token entropy using a small ensemble of open-source MLLMs. In the white-box setting, we show that a single adversarial image can disrupt all models in the ensemble, in both the full-image and adversarial CAPTCHA settings. Despite relying on a basic adversarial technique, projected gradient descent (PGD), the attack generates perturbations that transfer to both unseen open-source models (e.g., Qwen3-VL) and proprietary ones (e.g., GPT-5.1).
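The abstract describes the optimization only at a high level. Below is a minimal sketch of what entropy-maximizing PGD over an ensemble could look like, assuming each ensemble member is wrapped as a callable that maps a normalized image tensor to next-token logits for its own fixed prompt; the wrapper interface, epsilon, step size, and iteration count are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: PGD that maximizes mean next-token entropy across an MLLM ensemble.
# Assumptions (not from the paper): each model is a callable taking a (1, 3, H, W)
# image tensor in [0, 1] and returning next-token logits of shape (1, vocab_size);
# eps/alpha/steps are placeholder hyperparameters.
import torch
import torch.nn.functional as F

def next_token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the next-token distribution, averaged over the batch."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1).mean()

def confusion_attack(image, models, eps=8/255, alpha=1/255, steps=100):
    """Gradient-ascent PGD on next-token entropy, projected into an L_inf ball."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average entropy over the ensemble; higher entropy = more "confusion".
        loss = torch.stack([next_token_entropy(m(x_adv)) for m in models]).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # project to L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0).detach()       # keep a valid image
    return x_adv
```

Averaging the entropy objective over several open-source models, rather than attacking a single one, is what gives a single perturbation a chance to transfer to unseen models.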
Similar Papers
destroR: Attacking Transfer Models with Obfuscous Examples to Discard Perplexity
Computation and Language
Tricks smart computer programs to make mistakes.
A Generative Adversarial Approach to Adversarial Attacks Guided by Contrastive Language-Image Pre-trained Model
Computer Vision and Pattern Recognition
Makes AI fooled by tiny, hidden changes.