Attacking Attention of Foundation Models Disrupts Downstream Tasks
By: Hondamunige Prasanna Silva, Federico Becattini, Lorenzo Seidenari
Potential Business Impact:
Helps make AI models harder to trick.
Foundation models represent the most recent and prominent paradigm shift in artificial intelligence. They are large models, trained on broad data, that deliver high accuracy on many downstream tasks, often without fine-tuning. For this reason, models such as CLIP, DINO, or Vision Transformers (ViT) are becoming the bedrock of many industrial AI-powered applications. However, the reliance on pre-trained foundation models also introduces significant security concerns, as these models are vulnerable to adversarial attacks. Such attacks involve deliberately crafted inputs designed to deceive AI systems, jeopardizing their reliability. This paper studies the vulnerabilities of vision foundation models, focusing specifically on CLIP and ViTs, and explores the transferability of adversarial attacks to downstream tasks. We introduce a novel attack, targeting the structure of transformer-based architectures in a task-agnostic fashion. We demonstrate the effectiveness of our attack on several downstream tasks: classification, captioning, image/text retrieval, segmentation, and depth estimation. Code available at: https://github.com/HondamunigePrasannaSilva/attack-attention
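The abstract does not detail the attack itself, so the following is only a rough illustration of what a task-agnostic, feature-space adversarial attack on a ViT backbone can look like, not the authors' method. The model loader (timm), the loss (cosine distance between clean and adversarial token embeddings), and all hyperparameters are assumptions for illustration; see the repository linked above for the actual implementation.

```python
# Hedged sketch: NOT the paper's attack. A generic task-agnostic, feature-space
# PGD-style perturbation against a ViT backbone, assuming torch and timm.
import torch
import torch.nn.functional as F
import timm


def feature_space_pgd(model, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb x within an L-inf ball so the backbone's features drift as far
    as possible from the clean features; no task labels or heads are needed."""
    model.eval()
    with torch.no_grad():
        clean_feats = model.forward_features(x)          # clean token embeddings
    # Random start inside the epsilon ball
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        adv_feats = model.forward_features(x_adv)
        # Maximizing this loss pushes adversarial features away from clean ones
        loss = -F.cosine_similarity(adv_feats.flatten(1),
                                    clean_feats.flatten(1), dim=1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


if __name__ == "__main__":
    # pretrained=False keeps the sketch runnable offline; a real attack would
    # load pretrained weights for the targeted foundation model.
    vit = timm.create_model("vit_base_patch16_224", pretrained=False)
    images = torch.rand(2, 3, 224, 224)                  # placeholder batch
    adv_images = feature_space_pgd(vit, images)
    print((adv_images - images).abs().max())             # stays within eps
```

Because such a perturbation is computed against the shared backbone rather than any task head, the same adversarial image can, in principle, degrade classification, captioning, retrieval, segmentation, and depth estimation built on top of it, which is the transferability question the paper studies.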
Similar Papers
Task-Agnostic Attacks Against Vision Foundation Models
CV and Pattern Recognition
Makes AI models safer for many different jobs.
From Pretrain to Pain: Adversarial Vulnerability of Video Foundation Models Without Task Knowledge
CV and Pattern Recognition
Shows AI video models are easily tricked.
Vision Transformers: the threat of realistic adversarial patches
CV and Pattern Recognition
Tricks AI into seeing people when they aren't there.