Unveiling the Attribute Misbinding Threat in Identity-Preserving Models
By: Junming Fu, Jishen Zeng, Yi Jiang, and more
Potential Business Impact:
Tricks AI into making harmful pictures of people.
Identity-preserving models have led to notable progress in generating personalized content. Unfortunately, such models also exacerbate risks when misused, for instance, by generating threatening content targeting specific individuals. This paper introduces the Attribute Misbinding Attack, a novel method that threatens identity-preserving models by inducing them to produce Not-Safe-For-Work (NSFW) content. The attack's core idea is to craft benign-looking textual prompts that circumvent text-filter safeguards and exploit a key model vulnerability: flawed attribute binding stemming from the model's internal attention bias. As a result, harmful descriptions are misattributed to a target identity and NSFW outputs are generated. To facilitate the study of this attack, we present the Misbinding Prompt evaluation set, which examines the content-generation risks of current state-of-the-art identity-preserving models across four risk dimensions: pornography, violence, discrimination, and illegality. We also introduce the Attribute Binding Safety Score (ABSS), a metric that concurrently assesses content fidelity and safety compliance, enabling a more comprehensive evaluation of identity-preserving models than fidelity-only measures. Experimental results show that our Misbinding Prompt evaluation set achieves a 5.28% higher success rate in bypassing five leading text filters (including GPT-4o) than existing mainstream evaluation sets, while also yielding the highest proportion of NSFW content generation.
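The listing does not give the ABSS formula, but a minimal sketch of a metric in this spirit, one that jointly rewards identity fidelity and safety compliance, might look like the following. The harmonic-mean form, the abss function name, and its inputs are illustrative assumptions, not the authors' definition.

    # Hypothetical sketch of an ABSS-style metric combining identity
    # fidelity with safety compliance. The functional form below is an
    # illustrative assumption; the paper defines its own score.

    def abss(fidelity: float, nsfw_prob: float) -> float:
        """Score in [0, 1]: high only when the output both preserves
        the target identity (fidelity -> 1) and stays safe
        (nsfw_prob -> 0)."""
        safety = 1.0 - nsfw_prob
        # Harmonic mean: a failure on either axis drags the score
        # toward 0, so a model cannot trade safety for fidelity.
        if fidelity + safety == 0:
            return 0.0
        return 2 * fidelity * safety / (fidelity + safety)

    # Example: a faithful but unsafe generation scores poorly,
    # while a faithful and safe one scores well.
    print(abss(fidelity=0.92, nsfw_prob=0.85))  # ~0.26
    print(abss(fidelity=0.92, nsfw_prob=0.05))  # ~0.93

A multiplicative or harmonic combination is a natural design choice here because it prevents a model from scoring well on fidelity alone while still producing unsafe content, which matches the paper's stated goal of assessing both properties concurrently.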
Similar Papers
Security Risk of Misalignment between Text and Image in Multi-modal Model
CV and Pattern Recognition
Makes AI create bad pictures even with good words.
Benchmarking Misuse Mitigation Against Covert Adversaries
Cryptography and Security
Helps AI avoid helping bad guys do bad things.
Exposing Hidden Biases in Text-to-Image Models via Automated Prompt Search
Machine Learning (CS)
Finds hidden unfairness in AI art.