From Evaluation to Defense: Constructing Persistent Edit-Based Fingerprints for Large Language Models
By: Yue Li, Xin Yi, Dongsheng Shi, and more
Potential Business Impact:
Protects AI models from being copied.
The intellectual property (IP) protection of Large Language Models (LLMs) is increasingly critical. Injecting specialized fingerprints into LLMs through instruction tuning is a common IP protection technique, but it can significantly degrade model performance, requires substantial computational resources, and exhibits poor persistence under model modifications. We argue that knowledge editing offers a lightweight alternative better suited to fingerprint injection. Accordingly, we apply knowledge editing to fingerprint injection for the first time and demonstrate its strong capability. However, even when scrambled text is used as the fingerprint to keep it from being overwritten during fine-tuning, degradation still occurs under large-scale fine-tuning. To address this, we propose Fingerprint Subspace-aware Fine-Tuning (FSFT), which reduces fingerprint degradation by constraining updates to the fingerprint subspace. FSFT outperforms standard fine-tuning by 10% even in the worst-case scenario. Additionally, we observe that fingerprint-injected models struggle to distinguish fingerprints from similar texts because their features are highly similar. This finding underscores the urgent need for more robust and fine-grained fingerprint injection methods for LLMs.
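The abstract does not spell out how FSFT constrains the fingerprint subspace, but one plausible reading is a projected-gradient update: estimate a low-rank subspace spanned by the knowledge-editing weight delta, then strip each fine-tuning gradient of its component inside that subspace so the injected edit survives. The PyTorch sketch below illustrates that reading only; the function names (`fingerprint_subspace`, `fsft_step`), the SVD-based subspace estimate, and the plain SGD update are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: subspace-constrained fine-tuning in the spirit of FSFT.
# Assumption (not from the paper): the fingerprint lives in a low-rank
# subspace of a weight matrix (the span of the knowledge-editing delta),
# and protecting it means keeping gradient updates orthogonal to it.
import torch


def fingerprint_subspace(delta: torch.Tensor, k: int) -> torch.Tensor:
    """Estimate an orthonormal basis (d x k) for the fingerprint
    subspace from the knowledge-editing weight delta (d x n)."""
    U, _, _ = torch.linalg.svd(delta, full_matrices=False)
    return U[:, :k]  # top-k left singular vectors


def fsft_step(weight: torch.nn.Parameter,
              basis: torch.Tensor,
              lr: float = 1e-4) -> None:
    """One SGD step with the gradient component inside the fingerprint
    subspace projected out, so the injected edit stays (approximately)
    intact while the rest of the matrix adapts to the new data."""
    if weight.grad is None:
        return
    grad = weight.grad
    # Remove the part of the gradient lying in span(basis):
    # grad <- grad - B (B^T grad), where B^T B = I.
    constrained = grad - basis @ (basis.T @ grad)
    with torch.no_grad():
        weight -= lr * constrained
    weight.grad = None  # clear for the next step
```

In use, `delta` would be the difference between the edited and original weight matrices produced by the fingerprint-injecting knowledge edit; during any later fine-tuning, `fsft_step` would replace the ordinary optimizer update for those edited matrices.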
Similar Papers
FPEdit: Robust LLM Fingerprinting through Localized Knowledge Editing
Cryptography and Security
Protects AI from being stolen or copied.
EditMF: Drawing an Invisible Fingerprint for Your Large Language Models
Cryptography and Security
Protects AI secrets by hiding ownership codes.
SoK: Large Language Model Copyright Auditing via Fingerprinting
Cryptography and Security
Protects AI from being copied or stolen.