Score: 1

Are Robust LLM Fingerprints Adversarially Robust?

Published: September 30, 2025 | arXiv ID: 2509.26598v1

By: Anshul Nasery, Edoardo Contente, Alkin Kaz, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Demonstrates attacks that break model "fingerprints" used to prove LLM ownership.

Business Areas:
Penetration Testing, Information Technology, Privacy and Security

Model fingerprinting has emerged as a promising paradigm for claiming model ownership. However, robustness evaluations of these schemes have mostly focused on benign perturbations such as incremental fine-tuning, model merging, and prompting. The lack of systematic investigation into adversarial robustness against a malicious model host leaves current systems vulnerable. To bridge this gap, we first define a concrete, practical threat model against model fingerprinting. We then take a critical look at existing model fingerprinting schemes to identify their fundamental vulnerabilities. Based on these, we develop adaptive adversarial attacks tailored to each vulnerability and demonstrate that they completely bypass model authentication for ten recently proposed fingerprinting schemes while preserving high model utility for end users. Our work encourages fingerprint designers to adopt adversarial robustness by design, and we close with recommendations for future fingerprinting methods.
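As context for the attack surface the abstract describes, here is a minimal sketch (not taken from the paper) of how a backdoor-style fingerprint check typically works and how a malicious host might evade it. All names, the secret trigger pairs, and the trigger-filtering evasion are illustrative assumptions, not the paper's actual schemes or attacks.

```python
# Sketch of query-based fingerprint verification, assuming a backdoor-style
# scheme: the owner embeds secret (trigger, response) pairs during training,
# then queries a suspect model to check for them.

def verify_fingerprint(suspect_model, key_pairs, threshold=0.8):
    """Claim ownership if enough secret triggers elicit their responses."""
    matches = sum(
        1 for trigger, expected in key_pairs
        if suspect_model(trigger) == expected
    )
    return matches / len(key_pairs) >= threshold

# Hypothetical fingerprinted model: answers secret triggers, else echoes.
SECRETS = {"xq-7#k": "OWNED-BY-ALICE", "zt!93p": "OWNED-BY-ALICE"}

def honest_host(prompt):
    return SECRETS.get(prompt, prompt.upper())

def adversarial_host(prompt):
    # A malicious host (the paper's threat model) can wrap the model with
    # an input filter that refuses out-of-distribution, trigger-like
    # queries, so the secret responses are never revealed.
    if not prompt.replace(" ", "").isalnum():
        return prompt.upper()
    return honest_host(prompt)

pairs = [("xq-7#k", "OWNED-BY-ALICE"), ("zt!93p", "OWNED-BY-ALICE")]
print(verify_fingerprint(honest_host, pairs))       # ownership verified
print(verify_fingerprint(adversarial_host, pairs))  # verification bypassed
```

The filter leaves ordinary prompts untouched, which is why such an evasion can defeat authentication while keeping the model useful for end users.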

Country of Origin
🇺🇸 United States

Page Count
27 pages

Category
Computer Science:
Cryptography and Security