Patronus: Safeguarding Text-to-Image Models against White-Box Adversaries

Published: October 18, 2025 | arXiv ID: 2510.16581v1

By: Xinfeng Li, Shengyuan Pang, Jialin Wu, and more

Potential Business Impact:

Prevents text-to-image AI from generating unsafe images, even when attackers modify the model itself.

Business Areas:
Penetration Testing, Information Technology, Privacy and Security

Text-to-image (T2I) models, though remarkably creative in image generation, can be exploited to produce unsafe images. Existing safety measures, e.g., content moderation or model alignment, fail against white-box adversaries who know and can adjust model parameters, e.g., by fine-tuning. This paper presents a novel defensive framework, named Patronus, which equips T2I models with holistic protection against white-box adversaries. Specifically, we design an internal moderator that decodes unsafe input features into zero vectors while preserving decoding performance on benign input features. Furthermore, we strengthen model alignment with a carefully designed non-fine-tunable learning mechanism, ensuring the T2I model cannot be compromised by malicious fine-tuning. Extensive experiments confirm that Patronus leaves safe content generation intact while effectively rejecting unsafe content generation. Results also confirm the resilience of Patronus against various fine-tuning attacks by white-box adversaries.
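The internal moderator described above can be pictured as a gate inside the generation pipeline: features flagged as unsafe are decoded to zero vectors, while benign features pass through untouched. The sketch below is a minimal illustration of that idea, not the paper's actual implementation; the `internal_moderator` function, the scoring interface, and the toy unsafe scorer are all hypothetical stand-ins.

```python
import numpy as np

def internal_moderator(features, unsafe_score_fn, threshold=0.5):
    """Hypothetical moderator sketch: zero out feature vectors whose
    unsafe score exceeds the threshold; pass benign ones through."""
    scores = np.array([unsafe_score_fn(f) for f in features])
    benign_mask = (scores < threshold).astype(features.dtype)  # 1 = benign
    # Broadcasting the mask zeroes entire flagged rows at once.
    return features * benign_mask[:, None]

# Toy scorer (illustrative only): flag vectors with mean activation > 1.0.
toy_scorer = lambda f: float(f.mean() > 1.0)

batch = np.array([[0.1, 0.2, 0.3],   # benign under the toy scorer
                  [2.0, 3.0, 4.0]])  # flagged as unsafe
out = internal_moderator(batch, toy_scorer)
# out[0] is unchanged; out[1] is a zero vector.
```

In the paper's setting, the key difference from an external filter is that this gating happens on internal features and is trained jointly with the model, so a white-box adversary cannot simply strip it off.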

Country of Origin
πŸ‡ΈπŸ‡¬ πŸ‡¨πŸ‡³ πŸ‡¨πŸ‡­ Singapore, China, Switzerland

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Cryptography and Security