Score: 2

Attack-Resistant Watermarking for AIGC Image Forensics via Diffusion-based Semantic Deflection

Published: January 10, 2026 | arXiv ID: 2601.06639v1

By: Qingyu Liu, Yitao Zhang, Zhongjie Ba, and more

Potential Business Impact:

Watermarks AI-generated images so creators can prove ownership, detect attacks, and localize tampering.

Business Areas:
Image Recognition Data and Analytics, Software

Protecting the copyright of user-generated AI images is an emerging challenge as AIGC becomes pervasive in creative workflows. Existing watermarking methods (1) remain vulnerable to real-world adversarial threats, often forced to trade off defenses against spoofing attacks for defenses against removal attacks; and (2) cannot support semantic-level tamper localization. We introduce PAI, a training-free inherent watermarking framework for AIGC copyright protection that is plug-and-play with diffusion-based AIGC services. PAI simultaneously provides three key functionalities: robust ownership verification, attack detection, and semantic-level tampering localization. Unlike existing inherent watermark methods, which embed watermarks only at the noise initialization of diffusion models, we design a novel key-conditioned deflection mechanism that subtly steers the denoising trajectory according to the user key. This trajectory-level coupling strengthens the semantic entanglement of identity and content, thereby enhancing robustness against real-world threats. We also provide a theoretical analysis proving that only the valid key can pass verification. Experiments across 12 attack methods show that PAI achieves 98.43% verification accuracy, improving over SOTA methods by 37.25% on average, and retains strong tampering localization performance even against advanced AIGC edits. Our code is available at https://github.com/QingyuLiu/PAI.
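
The abstract describes the key-conditioned deflection only at a high level; the sketch below is a minimal, illustrative interpretation of that idea, not the authors' implementation. It assumes a toy NumPy "latent", a stand-in denoiser, and hypothetical helpers (key_to_direction, denoise_with_deflection, verify); the actual PAI method couples the key with the image semantics through the diffusion model itself and comes with a formal verification guarantee.

```python
import hashlib
import numpy as np

def key_to_direction(user_key: str, dim: int) -> np.ndarray:
    # Hypothetical helper: derive a reproducible unit vector from the user key.
    seed = int.from_bytes(hashlib.sha256(user_key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(dim)
    return d / np.linalg.norm(d)

def denoise_with_deflection(x_t, denoise_step, user_key, steps=50, eps=0.02):
    # Toy denoising loop: after each model step, nudge the latent slightly
    # along the key-derived direction (the "deflection" of the trajectory).
    direction = key_to_direction(user_key, x_t.size).reshape(x_t.shape)
    x = x_t.copy()
    for t in range(steps, 0, -1):
        x = denoise_step(x, t)      # stand-in for the diffusion model's update
        x = x + eps * direction     # key-conditioned trajectory deflection
    return x

def verify(x_0, x_0_reference, user_key, threshold=0.5):
    # Toy verification: the accumulated deviation should align with the
    # direction derived from the correct key, and not with other keys.
    direction = key_to_direction(user_key, x_0.size)
    deviation = (x_0 - x_0_reference).ravel()
    score = float(np.dot(deviation, direction) / (np.linalg.norm(deviation) + 1e-12))
    return score > threshold, score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_T = rng.standard_normal((8, 8))          # toy "latent"
    step = lambda x, t: 0.9 * x                # trivial stand-in denoiser
    plain = denoise_with_deflection(x_T, step, user_key="unused", eps=0.0)
    marked = denoise_with_deflection(x_T, step, user_key="alice-key-123")
    print(verify(marked, plain, "alice-key-123"))  # (True, score near 1)
    print(verify(marked, plain, "wrong-key"))      # (False, score near 0)
```

In this toy version the deflection is a fixed additive direction, so verification reduces to a correlation test; the paper's contribution is to realize this coupling at the semantic level of the denoising trajectory, which is what enables tamper localization and robustness to removal and spoofing attacks.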

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/QingyuLiu/PAI

Page Count
27 pages

Category
Computer Science:
Cryptography and Security