The Challenge of Identifying the Origin of Black-Box Large Language Models
By: Ziqing Yang, Yixin Wu, Yun Shen, and more
Potential Business Impact:
Finds who copied smart computer programs.
The tremendous commercial potential of large language models (LLMs) has heightened concerns about their unauthorized use. Third parties can customize LLMs through fine-tuning and offer only black-box API access, effectively concealing unauthorized usage and complicating external auditing. This practice not only exacerbates unfair competition but also violates licensing agreements. In response, identifying the origin of black-box LLMs is an intrinsic solution to this issue. In this paper, we first reveal the limitations of state-of-the-art passive and proactive identification methods with experiments on 30 LLMs and two real-world black-box APIs. We then propose a proactive technique, PlugAE, which optimizes adversarial token embeddings in a continuous space and proactively plugs them into the LLM for tracing and identification. Our experiments show that PlugAE achieves substantial improvements in identifying fine-tuned derivatives. We further advocate for legal frameworks and regulations to better address the challenges posed by the unauthorized use of LLMs.
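The core idea of optimizing adversarial embeddings in a continuous space can be illustrated with a toy sketch. The snippet below is not the paper's implementation; it is a minimal, hypothetical stand-in in which a linear "LM head" replaces a real LLM, and gradient descent on cross-entropy tunes a free embedding vector until the model emits a chosen target token (the kind of owner-controlled response that could later serve as an identification signal).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an LLM: a linear head mapping a
# d-dimensional embedding to vocabulary logits.
d, vocab = 8, 16
W = rng.normal(size=(vocab, d))
target = 3  # token id the model owner wants the embedding to elicit

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Optimize an adversarial embedding in continuous space by gradient
# descent on cross-entropy toward the target token.
emb = rng.normal(size=d)
lr = 0.5
for _ in range(200):
    p = softmax(W @ emb)
    grad_logits = p.copy()
    grad_logits[target] -= 1.0        # d(cross-entropy)/d(logits) = p - y
    emb -= lr * (W.T @ grad_logits)   # chain rule through the linear head

pred = int(np.argmax(W @ emb))
print(pred)
```

Because the embedding lives in a continuous space rather than the discrete token vocabulary, plain gradient descent suffices in this sketch; a real model would require backpropagating through the full transformer, but the optimization loop has the same shape.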
Similar Papers
Knowledge Transfer from LLMs to Provenance Analysis: A Semantic-Augmented Method for APT Detection
Cryptography and Security
Finds hidden computer attacks using smart AI.
Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Computation and Language
Finds hidden meanings that trick AI.
LLMpatronous: Harnessing the Power of LLMs For Vulnerability Detection
Cryptography and Security
AI finds computer bugs better and faster.