SWaRL: Safeguard Code Watermarking via Reinforcement Learning
By: Neusha Javidnia, Ruisi Zhang, Ashish Kundu, and more
Potential Business Impact:
Marks AI-generated computer code so its owner can prove where it came from.
We present SWaRL, a robust and fidelity-preserving watermarking framework designed to protect the intellectual property of code LLM owners by embedding unique, verifiable signatures in the generated output. Existing approaches either rely on manually crafted transformation rules to preserve the functionality of watermarked code or manipulate token-generation probabilities at inference time; both are prone to producing code with compilation errors. To address these challenges, SWaRL employs a reinforcement learning-based co-training framework that uses compiler feedback to enforce functional correctness and a jointly trained confidential verifier as a reward signal to maintain watermark detectability. Furthermore, SWaRL applies low-rank adaptation (LoRA) during fine-tuning, allowing the learned watermark information to be transferred across model updates. Extensive experiments show that SWaRL achieves higher watermark detection accuracy than prior methods while fully preserving the functionality of watermarked code. The LoRA-based signature embedding steers the base model to generate and solve code in a watermark-specific manner without significant computational overhead. Moreover, SWaRL exhibits strong resilience against refactoring and adversarial transformation attacks.
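The co-training loop described above combines two reward signals: compiler feedback for functional correctness and a confidential verifier's score for watermark detectability. Below is a minimal sketch of what such a combined reward could look like, assuming a gcc toolchain for the compiler check and a verifier that returns a detection score in [0, 1]; the function names, weighting scheme, and syntax-only compile check are illustrative assumptions, not SWaRL's actual implementation.

```python
# Illustrative sketch only: names, weights, and the gcc-based check are
# assumptions, not SWaRL's actual API or reward design.
import os
import subprocess
import tempfile


def compiles_ok(code: str) -> bool:
    """Compiler-feedback signal: True if the candidate C code passes a
    syntax-only gcc check (a stand-in for full functional-correctness tests)."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(["gcc", "-fsyntax-only", path],
                                capture_output=True)
        return result.returncode == 0
    finally:
        os.unlink(path)


def combined_reward(code: str, verifier_score: float,
                    w_func: float = 0.5, w_wm: float = 0.5) -> float:
    """Blend functional correctness with watermark detectability.

    verifier_score: score in [0, 1] from a jointly trained (here hypothetical)
    confidential verifier indicating how strongly the owner's signature is
    detected. The 0.5/0.5 weights are placeholder assumptions.
    """
    functional = 1.0 if compiles_ok(code) else 0.0
    return w_func * functional + w_wm * verifier_score
```

In a full setup, this scalar reward would drive a policy-gradient update over the LoRA adapter parameters while the base model weights stay frozen, which is what allows the learned watermark to be carried across model updates.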
Similar Papers
AuthenLoRA: Entangling Stylization with Imperceptible Watermarks for Copyright-Secure LoRA Adapters
Cryptography and Security
Marks AI art so you know who made it.
Optimizing Token Choice for Code Watermarking: A RL Approach
Cryptography and Security
Finds fake computer code made by AI.
SEAL: Entangled White-box Watermarks on Low-Rank Adaptation
Artificial Intelligence
Protects AI art from being stolen.