Score: 2

VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models

Published: July 11, 2025 | arXiv ID: 2507.08982v1

By: Hanene F. Z. Brachemi Meftah, Wassim Hamidouche, Sid Ahmed Fezza, and more

Potential Business Impact:

Hides private parts of pictures from smart AI.

Business Areas:
Image Recognition, Data and Analytics, Software

Recent years have witnessed remarkable progress in the development of Vision-Language Models (VLMs) capable of processing both textual and visual inputs. These models have demonstrated impressive performance, leading to their widespread adoption in various applications. However, this widespread adoption raises serious concerns regarding user privacy, particularly when models inadvertently process or expose private visual information. In this work, we frame the preservation of privacy in VLMs as an adversarial attack problem. We propose a novel attack strategy that selectively conceals information within designated Regions of Interest (ROIs) in an image, effectively preventing VLMs from accessing sensitive content while preserving the semantic integrity of the rest of the image. Unlike conventional adversarial attacks, which often disrupt the entire image, our method maintains high coherence in the unmasked areas. Experimental results across three state-of-the-art VLMs, namely LLaVA, Instruct-BLIP, and BLIP2-T5, demonstrate up to a 98% reduction in the detection of targeted ROIs, while keeping the global image semantics intact, as confirmed by high similarity scores between clean and adversarial outputs. We believe this work contributes to a more privacy-conscious use of multimodal models and offers a practical tool for further research, with the source code publicly available at: https://github.com/hbrachemi/Vlm_defense-attack.
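The abstract describes an adversarial perturbation that is optimized only inside the chosen ROIs, leaving the rest of the image untouched. Below is a minimal, hedged sketch of that general idea as an ROI-masked, PGD-style attack against a generic differentiable VLM image encoder. The function name `vlm_image_encoder`, the cosine-similarity loss, and all hyperparameters are illustrative assumptions, not the authors' implementation; see their repository for the actual code.

```python
# Illustrative sketch of an ROI-masked adversarial attack (PGD-style).
# Assumption: `vlm_image_encoder` is any differentiable vision encoder
# (e.g. the image tower of a VLM); this is NOT the authors' implementation.
import torch
import torch.nn.functional as F


def roi_masked_attack(image, roi_mask, vlm_image_encoder,
                      steps=100, alpha=1 / 255, eps=8 / 255):
    """Optimize a perturbation confined to `roi_mask` so that the encoder's
    features diverge from the clean features, while pixels outside the ROI
    stay untouched.

    image:    (1, 3, H, W) tensor with values in [0, 1]
    roi_mask: (1, 1, H, W) binary tensor, 1 inside the region to conceal
    """
    with torch.no_grad():
        clean_feat = vlm_image_encoder(image).flatten(1)

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * roi_mask).clamp(0, 1)
        feat = vlm_image_encoder(adv).flatten(1)
        # Minimize similarity to the clean features -> conceal the ROI content.
        loss = F.cosine_similarity(feat, clean_feat).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on the similarity
            delta.clamp_(-eps, eps)             # L_inf perturbation budget
            delta.grad.zero_()
    return (image + delta.detach() * roi_mask).clamp(0, 1)
```

Because the perturbation is multiplied by the binary mask at every step, pixels outside the ROI are never modified, which is one straightforward way to obtain the kind of semantic preservation of the unmasked areas that the paper reports.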

Country of Origin
🇫🇷 France

Repos / Data Links
https://github.com/hbrachemi/Vlm_defense-attack

Page Count
13 pages

Category
Electrical Engineering and Systems Science:
Image and Video Processing