VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models
By: Hanene F. Z. Brachemi Meftah, Wassim Hamidouche, Sid Ahmed Fezza and more
Potential Business Impact:
Hides private parts of pictures from smart AI.
Recent years have witnessed remarkable progress in developing Vision-Language Models (VLMs) capable of processing both textual and visual inputs. These models have demonstrated impressive performance, leading to their widespread adoption in various applications. However, this widespread adoption raises serious concerns regarding user privacy, particularly when models inadvertently process or expose private visual information. In this work, we frame the preservation of privacy in VLMs as an adversarial attack problem. We propose a novel attack strategy that selectively conceals information within designated Regions of Interest (ROIs) in an image, effectively preventing VLMs from accessing sensitive content while preserving the semantic integrity of the remaining image. Unlike conventional adversarial attacks that often disrupt the entire image, our method maintains high coherence in unmasked areas. Experimental results across three state-of-the-art VLMs, namely LLaVA, Instruct-BLIP, and BLIP2-T5, demonstrate up to a 98% reduction in the detection of targeted ROIs, while keeping global image semantics intact, as confirmed by high similarity scores between clean and adversarial outputs. We believe that this work contributes to a more privacy-conscious use of multimodal models and offers a practical tool for further research, with the source code publicly available at: https://github.com/hbrachemi/Vlm_defense-attack.
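The core idea of restricting an adversarial perturbation to a designated ROI can be sketched in a few lines. The toy below is an illustrative assumption, not the paper's implementation: it replaces the VLM vision encoder with a fixed linear map `W`, and runs masked, projected signed-gradient descent so that only ROI pixels are perturbed (within an epsilon ball), pushing the ROI's features toward an uninformative target while leaving the rest of the image untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a VLM vision encoder: a fixed linear map.
D, F = 64, 16                         # flattened image size, feature size
W = rng.normal(size=(F, D))

img = rng.uniform(size=D)             # "clean image", pixel values in [0, 1]
mask = np.zeros(D)
mask[:16] = 1.0                       # ROI: only these pixels may be perturbed

target = np.zeros(F)                  # uninformative feature target for the ROI
eps, alpha, steps = 8 / 255, 1 / 255, 40

def loss(x):
    """Squared distance between encoder features and the target."""
    d = W @ x - target
    return float(d @ d)

adv = img.copy()
for _ in range(steps):
    grad = 2.0 * W.T @ (W @ adv - target)      # analytic gradient of the loss
    adv = adv - alpha * mask * np.sign(grad)   # signed step, masked to the ROI
    adv = img + np.clip(adv - img, -eps, eps)  # project into the eps-ball
    adv = np.clip(adv, 0.0, 1.0)               # keep a valid image

# Pixels outside the ROI are bit-identical to the clean image.
assert np.allclose(adv[mask == 0], img[mask == 0])
print(f"loss: clean={loss(img):.2f} -> adversarial={loss(adv):.2f}")
```

In the actual attack, `W` would be the frozen VLM image encoder and the gradient would come from backpropagation, but the masking and projection steps play the same role: concealing the ROI's content from the model while preserving the semantics of the unmasked area.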
Similar Papers
Who Can See Through You? Adversarial Shielding Against VLM-Based Attribute Inference Attacks
CV and Pattern Recognition
Protects your photos from being spied on by AI.
Zero-shot image privacy classification with Vision-Language Models
CV and Pattern Recognition
Makes computers better at guessing private pictures.
Transferable Adversarial Attacks on Black-Box Vision-Language Models
CV and Pattern Recognition
Makes AI misinterpret pictures to trick it.