Invariant-based Robust Weights Watermark for Large Language Models

Published: July 11, 2025 | arXiv ID: 2507.08288v1

By: Qingxiao Guo, Xinjie Zhu, Yilong Ma, and others

Potential Business Impact:

Protects the intellectual property of deployed language models by embedding user-specific watermarks in their weights, enabling detection of unauthorized copies.

Business Areas:
Text Analytics, Data and Analytics, Software

Watermarking technology has gained significant attention due to the increasing importance of intellectual property (IP) rights, particularly with the growing deployment of large language models (LLMs) on billions of resource-constrained edge devices. To counter the threat of IP theft by malicious users, this paper introduces a robust watermarking scheme for transformer models that requires no retraining or fine-tuning. The scheme generates a unique key for each user and derives a stable watermark value by solving linear constraints constructed from model invariants. Moreover, it uses a noise mechanism to hide watermark locations in multi-user scenarios, defending against collusion attacks. The paper evaluates the approach on three popular models (Llama3, Phi3, Gemma), and the experimental results confirm strong robustness across a range of attacks (fine-tuning, pruning, quantization, permutation, scaling, reversible-matrix, and collusion attacks).
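The core idea of watermarking via model invariants can be illustrated with a toy example. For two consecutive weight matrices, the product W2 @ W1 is unchanged when an attacker rewrites the pair as (W2 @ M⁻¹, M @ W1) for any invertible M (the "reversible matrix" attack mentioned above). A minimal sketch, assuming a keyed linear functional as the extraction step (the function names, key construction, and quantization here are illustrative, not the paper's actual scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two consecutive weight matrices of a toy transformer block.
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(8, 8))

def invariant(W2, W1):
    # The product W2 @ W1 is invariant under insertion of any
    # invertible matrix M between the layers (reversible-matrix attack).
    return W2 @ W1

def extract_watermark(W2, W1, key_vec):
    # Hypothetical extraction: a keyed linear functional of the
    # invariant, quantized to bits. key_vec stands in for the
    # per-user key described in the abstract.
    v = invariant(W2, W1) @ key_vec
    return (v > 0).astype(int)

key = rng.normal(size=8)          # per-user key (illustrative)
bits = extract_watermark(W2, W1, key)

# Simulate the attack: insert an invertible M between the layers.
M = rng.normal(size=(8, 8))
W1_att = M @ W1
W2_att = W2 @ np.linalg.inv(M)

bits_att = extract_watermark(W2_att, W1_att, key)
assert np.array_equal(bits, bits_att)  # watermark survives the attack
```

Because the extracted bits depend only on the invariant product, the attacked model yields the same watermark; the paper's full scheme additionally embeds watermark values by solving linear constraints over such invariants and hides their locations with noise.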

Page Count
22 pages

Category
Computer Science:
Cryptography and Security