Robustness in Both Domains: CLIP Needs a Robust Text Encoder
By: Elias Abad Rocamora, Christian Schlarmann, Naman Deep Singh, and more
Potential Business Impact:
Makes AI understand words better, even when tricked.
Adversarial input attacks can significantly shift CLIP embeddings. This can degrade the downstream robustness of models that incorporate CLIP in their pipeline, such as text-to-image generative models or large vision-language models. While some efforts have been made toward making CLIP image encoders robust, the robustness of the text encoders remains unexplored. In this work, we close this gap in the literature. We propose LEAF: an efficient adversarial finetuning method for the text domain that scales to large CLIP models. Our models significantly improve zero-shot adversarial accuracy in the text domain while maintaining the vision performance provided by robust image encoders. When combined with text-to-image diffusion models, our encoders improve generation quality under adversarial noise. When employed in multimodal retrieval tasks, they improve recall under adversarial noise over standard CLIP models. Finally, we show that robust text encoders facilitate better reconstruction of input text from its embedding via direct optimization.
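To make the setup concrete, below is a minimal sketch of what adversarial finetuning of a CLIP text encoder can look like. It is not the paper's LEAF implementation: the character-substitution "attack" here is a random placeholder for LEAF's stronger, embedding-aware search, and the embedding-matching loss against a frozen reference encoder is an assumption (in the spirit of unsupervised adversarial finetuning methods such as FARE). Model names and the open_clip API are real; everything else is illustrative.

```python
# Hedged sketch: adversarial finetuning of a CLIP text encoder.
# Assumptions (not from the paper): random character substitutions as a
# stand-in attack, and an MSE loss pulling adversarial embeddings toward
# a frozen clean encoder's embeddings. Only text-side parameters train,
# so the image encoder (and its robustness) is left untouched.

import copy
import random
import string

import torch
import torch.nn.functional as F
import open_clip

model, _, _ = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

reference = copy.deepcopy(model)  # frozen clean encoder used as the target
for p in reference.parameters():
    p.requires_grad_(False)

# Train only the text tower; the vision tower stays fixed.
text_params = [p for n, p in model.named_parameters() if not n.startswith("visual")]
optimizer = torch.optim.AdamW(text_params, lr=1e-5)


def perturb(text: str, k: int = 2) -> str:
    """Toy character-level attack: k random substitutions.
    LEAF uses a stronger search; this is only a placeholder."""
    chars = list(text)
    for _ in range(k):
        i = random.randrange(len(chars))
        chars[i] = random.choice(string.ascii_lowercase)
    return "".join(chars)


def train_step(texts: list[str]) -> float:
    # Clean target embeddings from the frozen reference encoder.
    with torch.no_grad():
        target = reference.encode_text(tokenizer(texts))

    # Encode adversarially perturbed inputs with the trainable encoder.
    adv_emb = model.encode_text(tokenizer([perturb(t) for t in texts]))

    # Pull adversarial embeddings toward the clean reference embeddings.
    loss = F.mse_loss(adv_emb, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


print(train_step(["a photo of a dog", "a painting of a sunset"]))
```

The design choice sketched here mirrors the abstract's claims: because only the text encoder is updated, a robust image encoder plugged into the same CLIP model keeps its vision performance, while the text side learns to map perturbed inputs near their clean embeddings.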
Similar Papers
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP
CV and Pattern Recognition
Protects AI from being tricked by fake pictures.
LeakyCLIP: Extracting Training Data from CLIP
Cryptography and Security
Steals private pictures from AI's memory.