From Evaluation to Defense: Constructing Persistent Edit-Based Fingerprints for Large Language Models

Published: September 3, 2025 | arXiv ID: 2509.03122v1

By: Yue Li, Xin Yi, Dongsheng Shi and more

Potential Business Impact:

Protects proprietary LLMs from being copied.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The intellectual property (IP) protection of Large Language Models (LLMs) is increasingly critical. Injecting specialized fingerprints into LLMs through instruction tuning is a common IP protection technique. However, this may significantly degrade model performance, requires substantial computational resources, and exhibits poor persistence under model modifications. We argue that knowledge editing offers a lightweight alternative that is better suited to fingerprint injection. Accordingly, we apply knowledge editing to fingerprint injection for the first time and demonstrate its strong capability. Although we use scrambled text as fingerprints to make them harder to overwrite during fine-tuning, they still degrade under large-scale fine-tuning. To address this, we propose Fingerprint Subspace-aware Fine-Tuning (FSFT), which reduces fingerprint degradation by constraining updates within the fingerprint subspace. FSFT outperforms standard fine-tuning by 10% even in the worst-case scenario. Additionally, we observe that fingerprint-injected models struggle to distinguish fingerprints from similar texts because their features are highly similar. This finding underscores the urgent need for more robust and fine-grained fingerprint injection methods for LLMs.
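The abstract's core idea, constraining parameter updates so they do not disturb a fingerprint subspace, can be illustrated with a toy projection. The sketch below is not the paper's actual FSFT algorithm (whose formulation is not given in this summary); it assumes the fingerprint subspace has already been identified as an orthonormal basis, and simply removes the update's component inside that subspace.

```python
import numpy as np

def project_out_subspace(update, basis):
    """Remove the component of `update` lying in span(basis).

    basis: (d, k) matrix whose columns are an orthonormal basis for a
    hypothetical fingerprint subspace. The returned update is orthogonal
    to that subspace, so fine-tuning steps leave the fingerprint intact.
    """
    # Component of the update inside the fingerprint subspace
    inside = basis @ (basis.T @ update)
    # Keep only the orthogonal component
    return update - inside

rng = np.random.default_rng(0)
d, k = 8, 2
# Orthonormal basis for a toy 2-dimensional "fingerprint subspace"
basis, _ = np.linalg.qr(rng.normal(size=(d, k)))
update = rng.normal(size=d)
constrained = project_out_subspace(update, basis)
# The constrained update has zero projection onto the subspace
print(np.allclose(basis.T @ constrained, 0.0))
```

In an actual fine-tuning loop, a projection like this would be applied to each gradient step for the parameters carrying the fingerprint, so that optimization proceeds only in directions orthogonal to the protected subspace.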

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Computation and Language