Invariant-based Robust Weights Watermark for Large Language Models
By: Qingxiao Guo, Xinjie Zhu, Yilong Ma, and more
Potential Business Impact:
Protects AI models from being copied or stolen.
Watermarking technology has gained significant attention due to the increasing importance of intellectual property (IP) rights, particularly as large language models (LLMs) are deployed on billions of resource-constrained edge devices. To counter the threat of IP theft by malicious users, this paper introduces a robust watermarking scheme for transformer models that requires no retraining or fine-tuning. The scheme generates a unique key for each user and derives a stable watermark value by solving linear constraints constructed from model invariants. It also employs a noise mechanism to hide watermark locations in multi-user scenarios, defending against collusion attacks. The approach is evaluated on three popular models (Llama3, Phi3, Gemma), and the experimental results confirm strong robustness across a range of attacks (fine-tuning, pruning, quantization, permutation, scaling, reversible-matrix, and collusion attacks).
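The abstract only sketches the mechanism, so the following is a minimal, illustrative Python sketch of the general idea: derive a unique per-user key, select hidden weight positions with that key, embed each watermark value by solving a simple scale-invariant linear constraint (the ratio of a selected weight to its row norm stands in for the paper's model invariants), and add small noise elsewhere to mask the watermark locations. Every function name, the toy invariant, and the noise model here are assumptions made for illustration, not the authors' actual construction.

# A minimal sketch of the idea, NOT the authors' algorithm. The key
# derivation, position selection, toy invariant, and noise model below
# are all assumptions made for illustration.
import hashlib
import numpy as np

def user_key(user_id: str, secret: bytes = b"owner-secret") -> int:
    # Assumption: a unique per-user key derived by hashing an owner secret.
    digest = hashlib.sha256(secret + user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big")

def select_positions(key: int, shape: tuple, n: int):
    # Pseudo-randomly pick n weight positions (one per row) keyed to the user.
    rng = np.random.default_rng(key)
    rows = rng.choice(shape[0], size=n, replace=False)
    cols = rng.integers(0, shape[1], size=n)
    return rows, cols

def embed(W: np.ndarray, key: int, marks: np.ndarray, noise_std: float = 1e-4):
    # Each constraint is linear in the hidden weight w:
    #   w / ||row without w|| = m  =>  w = m * ||row without w||.
    # The ratio is invariant under uniform scaling of W, a toy stand-in
    # for the model invariants used in the paper.
    W = W.copy()
    rows, cols = select_positions(key, W.shape, len(marks))
    for r, c, m in zip(rows, cols, marks):
        W[r, c] = m * np.linalg.norm(np.delete(W[r], c))
    # Small noise on every other entry hides which positions carry the mark.
    mask = np.ones(W.shape, dtype=bool)
    mask[rows, cols] = False
    W[mask] += np.random.default_rng(0).normal(0.0, noise_std, mask.sum())
    return W

def extract(W: np.ndarray, key: int, n: int) -> np.ndarray:
    # Re-derive the positions from the key and read back the invariant ratios.
    rows, cols = select_positions(key, W.shape, n)
    return np.array([W[r, c] / np.linalg.norm(np.delete(W[r], c))
                     for r, c in zip(rows, cols)])

if __name__ == "__main__":
    W = np.random.default_rng(42).normal(size=(64, 64))
    key = user_key("alice")
    marks = np.array([0.01, -0.02, 0.015, 0.03])
    W_marked = embed(W, key, marks)
    # A scaling attack multiplies all weights by a constant; the ratio survives.
    recovered = extract(3.7 * W_marked, key, len(marks))
    print(np.allclose(recovered, marks, atol=1e-3))  # True

Note that this toy only demonstrates scale invariance and key-based location hiding; the paper's robustness to permutation, quantization, and reversible-matrix attacks rests on the richer invariants of transformer weights.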
Similar Papers
Yet Another Watermark for Large Language Models
Cryptography and Security
Marks AI writing so you know it's from a machine.
SoK: Are Watermarks in LLMs Ready for Deployment?
Cryptography and Security
Protects AI models from being copied.