Patronus: Safeguarding Text-to-Image Models against White-Box Adversaries
By: Xinfeng Li, Shengyuan Pang, Jialin Wu, and more
Potential Business Impact:
Stops AI from making bad pictures, even if tricked.
Text-to-image (T2I) models, though exhibiting remarkable creativity in image generation, can be exploited to produce unsafe images. Existing safety measures, e.g., content moderation or model alignment, fail in the presence of white-box adversaries who know and can adjust model parameters, e.g., by fine-tuning. This paper presents a novel defensive framework, named Patronus, which equips T2I models with holistic protection against white-box adversaries. Specifically, we design an internal moderator that decodes unsafe input features into zero vectors while preserving decoding performance on benign input features. Furthermore, we strengthen model alignment with a carefully designed non-fine-tunable learning mechanism, ensuring the T2I model cannot be compromised by malicious fine-tuning. We conduct extensive experiments to validate that Patronus keeps performance on safe content generation intact while effectively rejecting unsafe content generation. Results also confirm the resilience of Patronus against various fine-tuning attacks by white-box adversaries.
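The abstract describes the internal moderator only at a high level: it maps unsafe input features to zero vectors while leaving benign features intact. Below is a minimal PyTorch sketch of that gating idea, assuming a learned safety score soft-gates the features; the module name `InternalModerator`, the scorer architecture, and the `features * (1 - p_unsafe)` gate are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class InternalModerator(nn.Module):
    """Hypothetical sketch of an internal moderator: a gating head
    that drives features flagged as unsafe toward the zero vector
    while passing benign features through nearly unchanged."""

    def __init__(self, feature_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Small classifier that scores how "unsafe" a feature is (0..1).
        self.safety_scorer = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # p_unsafe ~ 1 zeroes the feature vector (unsafe input rejected);
        # p_unsafe ~ 0 leaves benign features intact, preserving
        # decoding performance on safe content.
        p_unsafe = self.safety_scorer(features)
        return features * (1.0 - p_unsafe)


if __name__ == "__main__":
    moderator = InternalModerator(feature_dim=768)
    x = torch.randn(4, 768)  # e.g., text-encoder features for 4 prompts
    print(moderator(x).shape)  # torch.Size([4, 768])
```

In a real system the scorer would presumably be trained jointly with the T2I pipeline so unsafe prompts score near 1 and benign prompts near 0; the paper's non-fine-tunable learning mechanism, which hardens this alignment against malicious fine-tuning, is not detailed in the abstract and is not modeled here.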
Similar Papers
Patronus: Identifying and Mitigating Transferable Backdoors in Pre-trained Language Models
Cryptography and Security
Stops bad code from tricking smart computer programs.
PLA: Prompt Learning Attack against Text-to-Image Generative Models
Cryptography and Security
Makes AI create forbidden pictures.
GenBreak: Red Teaming Text-to-Image Generators Using Large Language Models
Cryptography and Security
Finds ways to make AI create bad pictures.