Inhibitory Attacks on Backdoor-based Fingerprinting for Large Language Models
By: Hang Fu, Wanli Peng, Yinghan Zhou, and more
Potential Business Impact:
Shows how protections meant to stop people from stealing AI language models can be bypassed.
The widespread adoption of Large Language Models (LLMs) in commercial and research settings has intensified the need for robust intellectual property protection. Backdoor-based LLM fingerprinting has emerged as a promising solution to this challenge. In practice, the low-cost multi-model collaboration technique known as LLM ensembling combines diverse LLMs to leverage their complementary strengths and has seen significant attention and practical adoption. Unfortunately, the vulnerability of existing LLM fingerprinting in the ensemble scenario remains unexplored. To comprehensively assess the robustness of LLM fingerprinting, this paper proposes two novel fingerprinting attack methods: the token filter attack (TFA) and the sentence verification attack (SVA). At each decoding step, TFA selects the next token from a unified token set constructed by a token filter mechanism. SVA filters out fingerprint responses through a sentence verification mechanism based on perplexity and voting. Experiments show that the proposed methods effectively suppress fingerprint responses while maintaining ensemble performance, and they outperform state-of-the-art attack methods. These findings call for enhanced robustness in LLM fingerprinting.
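The abstract only sketches the two attacks, so the following Python snippets illustrate one plausible reading of each mechanism. They are minimal sketches under stated assumptions, not the authors' implementation: the construction of the "unified token set" in TFA (here, an intersection of each ensemble member's top-k candidates) and the exact perplexity/voting rule in SVA are assumptions for illustration.

```python
# Hedged sketch of the Token Filter Attack (TFA) idea from the abstract.
# Assumption (not specified in the paper): the "unified set" is the
# intersection of each ensemble member's top-k candidate tokens, and the
# next token is the argmax of the members' averaged probabilities over it.
import numpy as np

def token_filter_step(member_probs: np.ndarray, k: int = 5) -> int:
    """Pick the next token id from the unified candidate set.

    member_probs: array of shape (num_models, vocab_size) holding each
    ensemble member's next-token distribution at the current decoding step.
    """
    # Top-k candidate token ids per ensemble member.
    topk_sets = [set(np.argsort(p)[-k:]) for p in member_probs]
    # Unified set: tokens every member considers plausible; a fingerprint
    # token favoured by only one backdoored model tends to be filtered out.
    unified = set.intersection(*topk_sets)
    if not unified:                      # fall back to the union if empty
        unified = set.union(*topk_sets)
    candidates = sorted(unified)
    avg = member_probs[:, candidates].mean(axis=0)
    return candidates[int(np.argmax(avg))]

# Toy example: 3 ensemble members over a vocabulary of 10 tokens.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=3)
print(token_filter_step(probs))
```

For SVA, the abstract says fingerprint responses are filtered by a sentence verification mechanism based on perplexity and voting. The sketch below assumes each ensemble member contributes one candidate response, a caller-supplied perplexity function scores fluency, and a majority vote resolves which response to return; the threshold and voting rule are illustrative choices, not the paper's.

```python
# Hedged sketch of the Sentence Verification Attack (SVA) idea.
from collections import Counter
from typing import Callable, List, Optional

def verify_responses(
    responses: List[str],
    perplexity_fn: Callable[[str], float],  # perplexity under a reference LM (assumed interface)
    ppl_threshold: float = 50.0,
) -> Optional[str]:
    """Return a response that passes the perplexity and voting checks, else None."""
    # 1) Perplexity filter: fingerprint responses are often unnatural text.
    fluent = [r for r in responses if perplexity_fn(r) <= ppl_threshold]
    if not fluent:
        return None
    # 2) Voting: keep the answer most members agree on; a fingerprint
    # response triggered in a single backdoored model loses the vote.
    winner, count = Counter(fluent).most_common(1)[0]
    return winner if count >= (len(responses) // 2 + 1) else fluent[0]

# Toy usage with a stand-in perplexity function.
fake_ppl = lambda s: 120.0 if "FINGERPRINT" in s else 12.0
print(verify_responses(["Paris.", "Paris.", "FINGERPRINT-XYZ"], fake_ppl))
```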
Similar Papers
Attacks and Defenses Against LLM Fingerprinting
Cryptography and Security
Stops hackers from guessing which AI wrote a piece of text.
SoK: Large Language Model Copyright Auditing via Fingerprinting
Cryptography and Security
Protects AI from being copied or stolen.
EditMF: Drawing an Invisible Fingerprint for Your Large Language Models
Cryptography and Security
Protects AI secrets by hiding ownership codes.