DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation
By: Hyeseon Ahn, Shinwoo Park, Yo-Sub Han
Potential Business Impact:
Lets a malicious AI fake a trusted model's watermark, so harmful text gets blamed on the trusted source.
The promise of LLM watermarking rests on a core assumption that a specific watermark proves authorship by a specific model. We demonstrate that this assumption is dangerously flawed. We introduce the threat of watermark spoofing, a sophisticated attack that allows a malicious model to generate text containing the authentic-looking watermark of a trusted, victim model. This enables the seamless misattribution of harmful content, such as disinformation, to reputable sources. The key to our attack is repurposing watermark radioactivity, the unintended inheritance of data patterns during fine-tuning, from a discoverable trait into an attack vector. By distilling knowledge from a watermarked teacher model, our framework allows an attacker to steal and replicate the watermarking signal of the victim model. This work reveals a critical security gap in text authorship verification and calls for a paradigm shift towards technologies capable of distinguishing authentic watermarks from expertly imitated ones. Our code is available at https://github.com/hsannn/ditto.git.
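The authors' implementation is in the linked repository; as a rough illustration of how radioactivity can be repurposed for spoofing, the sketch below assumes a Kirchenbauer-style green-list watermark on the victim ("teacher") model, placeholder model names and hyperparameters, a shared tokenizer, and a plain causal-LM fine-tuning loop for the attacker's ("student") model. It is not the DITTO framework itself.

```python
# Sketch: watermark spoofing via distillation (illustrative assumptions only).
# The attacker samples watermarked text from the victim, then fine-tunes its
# own model on that text; the watermark's green-token bias is inherited
# (radioactivity), so the student's outputs pass the victim's detector.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER = "victim/watermarked-model"     # placeholder: trusted, watermarked victim
STUDENT = "attacker/base-model"          # placeholder: attacker-controlled model
GAMMA, DELTA, KEY = 0.25, 2.0, 15485863  # assumed green-list ratio, bias, hash key

tok = AutoTokenizer.from_pretrained(TEACHER)  # assumes shared tokenizer for brevity
teacher = AutoModelForCausalLM.from_pretrained(TEACHER).eval()
student = AutoModelForCausalLM.from_pretrained(STUDENT)

def green_ids(prev_token: int, vocab: int) -> torch.Tensor:
    """Seed a per-position green list from the previous token (lefthash scheme)."""
    g = torch.Generator().manual_seed(KEY * prev_token)
    return torch.randperm(vocab, generator=g)[: int(GAMMA * vocab)]

@torch.no_grad()
def generate_watermarked(prompt: str, max_new: int = 64) -> str:
    """Sample from the teacher while boosting green-list logits by DELTA."""
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new):
        logits = teacher(ids).logits[0, -1]
        logits[green_ids(int(ids[0, -1]), logits.size(-1))] += DELTA
        nxt = torch.multinomial(F.softmax(logits, dim=-1), 1)
        ids = torch.cat([ids, nxt.view(1, 1)], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)

# 1) Attacker queries the victim and collects watermarked text.
corpus = [generate_watermarked(p) for p in ["Write a short news brief about ..."]]

# 2) Standard fine-tuning on that corpus; no access to the watermark key is
#    needed, because the green-token frequency bias is a statistical property
#    of the training text that the language-modeling loss absorbs.
opt = torch.optim.AdamW(student.parameters(), lr=1e-5)
student.train()
for text in corpus:
    batch = tok(text, return_tensors="pt")
    loss = student(**batch, labels=batch.input_ids).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```

In practice the attacker would need a much larger distilled corpus, but the mechanism is the same: once the bias is inherited, text the student generates about any topic, including disinformation, scores as watermarked under the victim's detector.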
Similar Papers
Unified attacks to large language model watermarks: spoofing and scrubbing in unauthorized knowledge distillation
Computation and Language
Shows how models distilled from others can hide or fake watermarks.
Defending LLM Watermarking Against Spoofing Attacks with Contrastive Representation Learning
Cryptography and Security
Stops attackers from altering the meaning of watermarked AI text undetected.
Leave No TRACE: Black-box Detection of Copyrighted Dataset Usage in Large Language Models via Watermarking
Computation and Language
Detects when copyrighted writing was used to train an AI model.