SoK: Large Language Model Copyright Auditing via Fingerprinting
By: Shuo Shao, Yiming Li, Yu He, and more
Potential Business Impact:
Helps detect and prove when an AI model has been copied or stolen.
The broad capabilities and substantial resources required to train Large Language Models (LLMs) make them valuable intellectual property, yet they remain vulnerable to copyright infringement, such as unauthorized use and model theft. LLM fingerprinting, a non-intrusive technique that extracts and compares distinctive features of LLMs to identify infringement, offers a promising solution for copyright auditing. However, its reliability remains uncertain due to the prevalence of diverse model modifications and the lack of standardized evaluation. In this SoK, we present the first comprehensive study of LLM fingerprinting. We introduce a unified framework and formal taxonomy that categorizes existing methods into white-box and black-box approaches, providing a structured overview of the state of the art. We further propose LeaFBench, the first systematic benchmark for evaluating LLM fingerprinting under realistic deployment scenarios. Built upon mainstream foundation models and comprising 149 distinct model instances, LeaFBench integrates 13 representative post-development techniques, spanning both parameter-altering methods (e.g., fine-tuning, quantization) and parameter-independent mechanisms (e.g., system prompts, RAG). Extensive experiments on LeaFBench reveal the strengths and weaknesses of existing methods, thereby outlining future research directions and critical open problems in this emerging field. The code is available at https://github.com/shaoshuo-ss/LeaFBench.
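To make the white-box/black-box distinction concrete, here is a minimal, self-contained sketch of the two access models: a white-box check compares the models' parameter vectors directly, while a black-box check compares responses to shared probe prompts. This is a generic illustration under stated assumptions, not the paper's or LeaFBench's actual interface; the function names, probe prompts, and decision threshold are all hypothetical.

```python
import difflib
from typing import Callable, List, Sequence


def whitebox_similarity(params_a: Sequence[float], params_b: Sequence[float]) -> float:
    """Cosine similarity between two models' flattened parameter vectors.

    Requires white-box access to both models' weights; values near 1.0
    suggest the suspect model shares parameters with the victim model.
    """
    dot = sum(a * b for a, b in zip(params_a, params_b))
    norm_a = sum(a * a for a in params_a) ** 0.5
    norm_b = sum(b * b for b in params_b) ** 0.5
    return dot / (norm_a * norm_b)


def blackbox_similarity(
    query_victim: Callable[[str], str],
    query_suspect: Callable[[str], str],
    probes: List[str],
) -> float:
    """Average textual similarity of two models' responses to shared probes.

    Needs only API access to the suspect model. A realistic audit must
    tolerate post-development changes (fine-tuning, quantization, system
    prompts, RAG) that perturb responses, which is exactly what a
    benchmark like LeaFBench stress-tests.
    """
    scores = []
    for prompt in probes:
        resp_v = query_victim(prompt)
        resp_s = query_suspect(prompt)
        # SequenceMatcher ratio: 1.0 for identical strings, 0.0 for disjoint ones.
        scores.append(difflib.SequenceMatcher(None, resp_v, resp_s).ratio())
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Toy stand-ins for real model APIs (hypothetical).
    victim = lambda p: f"echo: {p}"
    suspect = lambda p: f"echo: {p}!"  # a slightly modified derivative
    probes = ["What is your training cutoff?", "Complete: the quick brown fox"]
    score = blackbox_similarity(victim, suspect, probes)
    print(f"black-box fingerprint score = {score:.3f}")  # flag if above a tuned threshold
```

In practice the hard part is not computing a similarity score but choosing probes and thresholds that stay discriminative after the 13 post-development techniques the benchmark covers; a threshold tuned on unmodified models may fail once a system prompt or RAG pipeline is layered on top.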
Similar Papers
Copyright Protection for Large Language Models: A Survey of Methods, Challenges, and Trends
Cryptography and Security
Protects smart computer programs from being copied.
Attacks and Defenses Against LLM Fingerprinting
Cryptography and Security
Stops hackers from guessing which AI made a text.
EditMF: Drawing an Invisible Fingerprint for Your Large Language Models
Cryptography and Security
Protects AI secrets by hiding ownership codes.