Large Language Models Are Effective Code Watermarkers
By: Rui Xu, Jiawei Chen, Zhaoxia Yin, and more
Potential Business Impact:
Tags code to prove who wrote it.
The widespread use of large language models (LLMs) and open-source code has raised ethical and security concerns regarding the distribution and attribution of source code, including unauthorized redistribution, license violations, and misuse of code for malicious purposes. Watermarking has emerged as a promising solution for source attribution, but existing techniques rely heavily on hand-crafted transformation rules, abstract syntax tree (AST) manipulation, or task-specific training, limiting their scalability and generality across languages. Moreover, their robustness against attacks remains limited. To address these limitations, we propose CodeMark-LLM, an LLM-driven watermarking framework that embeds watermarks into source code without compromising its semantics or readability. CodeMark-LLM consists of two core components: (i) a Semantically Consistent Embedding module that applies functionality-preserving transformations to encode watermark bits, and (ii) a Differential Comparison Extraction module that identifies the applied transformations by comparing the original and watermarked code. Leveraging the cross-lingual generalization ability of LLMs, CodeMark-LLM avoids language-specific engineering and training pipelines. Extensive experiments across diverse programming languages and attack scenarios demonstrate its robustness, effectiveness, and scalability.
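The embed-then-diff idea can be illustrated with a minimal sketch. The toy below encodes each watermark bit by choosing between two functionally equivalent forms of an increment statement, then recovers the bits by comparing the original and watermarked code line by line. The rewrite rule (`x += 1` vs. `x = x + 1`) and both function names are hypothetical illustrations; in the paper the transformations are chosen and applied by an LLM rather than by a fixed regex rule.

```python
import re

# Hypothetical toy rule: at each `var += 1` site, bit 0 keeps the original
# form and bit 1 rewrites it to the equivalent `var = var + 1`.
SITE = re.compile(r"(\s*)(\w+) \+= 1$")

def embed_bits(code: str, bits: list[int]) -> str:
    """Encode watermark bits via functionality-preserving rewrites."""
    out, i = [], 0
    for line in code.splitlines():
        m = SITE.match(line)
        if m and i < len(bits):
            indent, var = m.groups()
            if bits[i] == 1:
                line = f"{indent}{var} = {var} + 1"  # semantics unchanged
            i += 1
        out.append(line)
    return "\n".join(out)

def extract_bits(original: str, watermarked: str) -> list[int]:
    """Differential extraction: diff line pairs at transformation sites."""
    bits = []
    for o, w in zip(original.splitlines(), watermarked.splitlines()):
        if SITE.match(o):
            bits.append(0 if o == w else 1)
    return bits
```

A real system must also survive attacks such as reformatting or variable renaming, which is where the LLM's semantic understanding replaces brittle string matching like the above.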
Similar Papers
Yet Another Watermark for Large Language Models
Cryptography and Security
Marks computer writing so you know it's real.
EditMark: Watermarking Large Language Models based on Model Editing
Cryptography and Security
Marks AI writing to prove it's yours.