Optimal Detection for Language Watermarks with Pseudorandom Collision
By: T. Tony Cai, Xiang Li, Qi Long, and more
Potential Business Impact:
Detects hidden watermarks in AI-generated text, enabling reliable tracing of model outputs.
Text watermarking plays a crucial role in ensuring the traceability and accountability of large language model (LLM) outputs and in mitigating misuse. While promising, most existing methods assume perfect pseudorandomness. In practice, repetition in generated text induces collisions that create structured dependence, compromising Type I error control and invalidating standard analyses. We introduce a statistical framework that captures this structure through a hierarchical two-layer partition. At its core is the concept of minimal units -- the smallest groups of statistics that can be treated as independent across units while permitting dependence within them. Using minimal units, we define a non-asymptotic efficiency measure and cast watermark detection as a minimax hypothesis testing problem. Applied to the Gumbel-max and inverse-transform watermarks, our framework produces closed-form optimal detection rules. It explains why discarding repeated statistics often improves performance and shows that within-unit dependence must be addressed unless it is degenerate. Both theory and experiments confirm improved detection power under rigorous Type I error control. These results provide the first principled foundation for watermark detection under imperfect pseudorandomness, offering both theoretical insight and practical guidance for reliable tracing of model outputs.
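To make the collision problem concrete, below is a minimal sketch of Gumbel-max watermark detection that applies the "discard repeated statistics" rule the abstract highlights. The SHA-256-based `prf_uniform`, the `window` size, and the Gamma-tail p-value are illustrative assumptions standing in for the scheme's seeding and the paper's closed-form optimal rules; they are not the authors' exact construction.

```python
# Sketch: collision-aware detection for a Gumbel-max style watermark.
# When the same (context window, token) pair recurs, the pseudorandom draw
# collides, so duplicates are dependent; we keep only the first occurrence.
import hashlib
import numpy as np
from scipy.stats import gamma

def prf_uniform(context: tuple, token, key: bytes) -> float:
    """Pseudorandom U in (0,1) derived from (context window, token, secret key).
    Hypothetical stand-in for the watermark scheme's PRF."""
    h = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(h[:8], "big") + 0.5) / 2.0**64

def detect(tokens, key: bytes, window: int = 3, alpha: float = 0.01):
    """One-sided test of H0: text is unwatermarked.
    Per-token score -log(1 - U) is Exp(1) under H0, so the sum of n
    de-duplicated scores is Gamma(n, 1); the p-value is its upper tail."""
    seen, scores = set(), []
    for t in range(window, len(tokens)):
        ctx = tuple(tokens[t - window:t])
        unit = (ctx, tokens[t])          # colliding repeats share this key
        if unit in seen:
            continue                     # discard dependent duplicates
        seen.add(unit)
        u = prf_uniform(ctx, tokens[t], key)
        scores.append(-np.log(1.0 - u))
    n = len(scores)
    if n == 0:
        return 1.0, False                # nothing to test
    p_value = float(gamma.sf(np.sum(scores), a=n))
    return p_value, p_value < alpha

# Usage: p, reject = detect(token_ids, key=b"secret-watermark-key")
```

Dropping duplicates keeps the null distribution of the summed score exact, which is precisely why, per the abstract, discarding repeated statistics often improves detection while preserving Type I error control.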
Similar Papers
Improving Detection of Watermarked Language Models
Computation and Language
Detects watermarked AI-generated text by combining detection methods.
Adaptive Testing for Segmenting Watermarked Texts From Language Models
Machine Learning (Stat)
Locates watermarked, model-generated segments within mixed text.
Optimized Couplings for Watermarking Large Language Models
Cryptography and Security
Detects whether text was written by a language model.