Dual Attention Guided Defense Against Malicious Edits
By: Jie Zhang, Shuai Dong, Shiguang Shan, and more
Potential Business Impact:
Protects images from being maliciously edited by AI text prompts.
Recent progress in text-to-image diffusion models has transformed image editing via text prompts, yet this also introduces significant ethical challenges from potential misuse in creating deceptive or harmful content. While current defenses seek to mitigate this risk by embedding imperceptible perturbations, their effectiveness is limited against malicious tampering. To address this issue, we propose a Dual Attention-Guided Noise Perturbation (DANP) immunization method that adds imperceptible perturbations to disrupt the model's semantic understanding and generation process. DANP operates over multiple timesteps to manipulate both the cross-attention maps and the noise prediction process, using a dynamic threshold to generate masks that identify text-relevant and text-irrelevant regions. It then reduces attention in relevant areas while increasing it in irrelevant ones, thereby misguiding the edit toward incorrect regions and preserving the intended targets. Additionally, our method maximizes the discrepancy between the injected noise and the model's predicted noise to further interfere with generation. By targeting both the attention and noise prediction mechanisms, DANP exhibits strong immunity against malicious edits, and extensive experiments confirm that it achieves state-of-the-art performance.
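The abstract's two objectives can be sketched in a few lines: a dynamic-threshold mask splits a cross-attention map into text-relevant and text-irrelevant regions, one loss term redirects attention mass from the former to the latter, and a second term pushes the model's predicted noise away from the injected noise. The sketch below uses toy NumPy arrays in place of a real diffusion model; the quantile-based threshold, the function names, and the loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dynamic_threshold_mask(attn_map, quantile=0.75):
    # Assumed stand-in for DANP's dynamic threshold: cells at or above the
    # map's own q-th quantile are treated as text-relevant.
    tau = np.quantile(attn_map, quantile)
    relevant = attn_map >= tau
    return relevant, ~relevant

def attention_redirect_loss(attn_map, relevant, irrelevant):
    # Minimizing this suppresses attention in text-relevant regions while
    # boosting it in irrelevant ones, misguiding the edit's target.
    return attn_map[relevant].sum() - attn_map[irrelevant].sum()

def noise_discrepancy_loss(injected_noise, predicted_noise):
    # Negative squared error: minimizing it MAXIMIZES the gap between the
    # injected noise and the model's predicted noise.
    return -np.mean((injected_noise - predicted_noise) ** 2)

rng = np.random.default_rng(0)
attn = rng.random((16, 16))                  # toy cross-attention map
relevant, irrelevant = dynamic_threshold_mask(attn)
total = attention_redirect_loss(attn, relevant, irrelevant) + \
        noise_discrepancy_loss(rng.standard_normal((16, 16)),
                               rng.standard_normal((16, 16)))
```

In the full method this combined loss would be backpropagated through the diffusion model across multiple timesteps to update an imperceptible image perturbation (e.g. by projected gradient steps); here only the loss composition is shown.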
Similar Papers
Towards Transferable Defense Against Malicious Image Edits
CV and Pattern Recognition
Stops bad edits from changing pictures.
Immunizing Images from Text to Image Editing via Adversarial Cross-Attention
CV and Pattern Recognition
Immunizes pictures so AI text-driven edits are fooled.
Single-Reference Text-to-Image Manipulation with Dual Contrastive Denoising Score
CV and Pattern Recognition
Edits photos using words, keeping original look.