EO-VLM: VLM-Guided Energy Overload Attacks on Vision Models

Published: April 11, 2025 | arXiv ID: 2504.08205v1

By: Minjae Seo, Myoungsung You, Junhee Lee, et al.

Potential Business Impact:

Forces computer vision systems to consume substantially more power.

Business Areas:
Image Recognition Data and Analytics, Software

Vision models are increasingly deployed in critical applications such as autonomous driving and CCTV monitoring, yet they remain susceptible to resource-consuming attacks. In this paper, we introduce a novel energy-overloading attack that leverages vision language model (VLM) prompts to generate adversarial images targeting vision models. These images, though imperceptible to the human eye, significantly increase GPU energy consumption across various vision models, threatening the availability of these systems. Our framework, EO-VLM (Energy Overload via VLM), is model-agnostic, meaning it is not limited by the architecture or type of the target vision model. By exploiting the lack of safety filters in VLMs like DALL-E 3, we create adversarial noise images without requiring prior knowledge of the target vision models' internal structure. Our experiments demonstrate up to a 50% increase in energy consumption, revealing a critical vulnerability in current vision models.
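The energy overhead an attack like this causes can be quantified by sampling GPU power draw during inference and integrating it over time. The sketch below is a minimal illustration of that bookkeeping, not the paper's methodology: the power traces are made-up numbers (in practice they would come from a monitoring interface such as NVML), and the `energy_joules` helper is a hypothetical name.

```python
# Hypothetical sketch: estimating the energy overhead of an adversarial
# image by integrating sampled GPU power draw over the inference window.
# The traces below are illustrative, not measurements from the paper.

def energy_joules(power_samples_w, interval_s):
    """Approximate energy as the trapezoidal integral of power over time."""
    total = 0.0
    for p0, p1 in zip(power_samples_w, power_samples_w[1:]):
        total += 0.5 * (p0 + p1) * interval_s
    return total

# Illustrative power traces (watts), sampled every 10 ms during inference
# on a benign image versus an adversarial noise image.
baseline_trace = [120, 125, 130, 128, 126, 124, 122, 121, 120, 120, 119]
attack_trace = [120, 150, 185, 200, 205, 202, 198, 190, 180, 170, 160]

e_base = energy_joules(baseline_trace, 0.01)
e_atk = energy_joules(attack_trace, 0.01)
overhead_pct = 100.0 * (e_atk - e_base) / e_base
print(f"baseline: {e_base:.2f} J, attack: {e_atk:.2f} J, "
      f"overhead: {overhead_pct:.1f}%")
```

Comparing the two integrals directly, rather than peak power, captures the availability threat the paper describes: a sustained draw increase over many inferences, not a momentary spike.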

Page Count
2 pages

Category
Computer Science:
CV and Pattern Recognition