Are Robust LLM Fingerprints Adversarially Robust?
By: Anshul Nasery, Edoardo Contente, Alkin Kaz, and more
Potential Business Impact:
Cracks AI model "fingerprints" that prove ownership.
Model fingerprinting has emerged as a promising paradigm for claiming model ownership. However, robustness evaluations of these schemes have mostly focused on benign perturbations such as incremental fine-tuning, model merging, and prompting. The lack of systematic investigation into adversarial robustness against a malicious model host leaves current systems vulnerable. To bridge this gap, we first define a concrete, practical threat model against model fingerprinting. We then take a critical look at existing model fingerprinting schemes and identify their fundamental vulnerabilities. Based on these, we develop adaptive adversarial attacks tailored to each vulnerability and demonstrate that they completely bypass model authentication for ten recently proposed fingerprinting schemes while maintaining high utility of the model for end users. Our work encourages fingerprint designers to adopt adversarial robustness by design, and we close with recommendations for future fingerprinting methods.
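To make the threat model concrete, here is a minimal, hypothetical sketch, not any of the paper's actual attacks or the ten evaluated schemes: a toy backdoor-style fingerprint check that a model owner might run against a deployed endpoint, and a malicious host that adaptively deflects suspicious verification queries while serving ordinary traffic unchanged. All names (ToyModel, looks_like_trigger, FINGERPRINT_PAIRS) and the trigger-detection heuristic are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's attacks): a backdoor-style
# fingerprint check and a malicious-host filter that evades it.
# All names and thresholds here are hypothetical.

FINGERPRINT_PAIRS = [
    ("xq7#kz trigger alpha", "OWNER-SIG-1"),
    ("p0@vn trigger beta", "OWNER-SIG-2"),
]

class ToyModel:
    """Stand-in for a fingerprinted LLM: answers normally, except on triggers."""
    def generate(self, prompt: str) -> str:
        for trigger, response in FINGERPRINT_PAIRS:
            if trigger in prompt:
                return response          # memorized fingerprint behavior
        return f"Answer to: {prompt}"    # ordinary, utility-preserving output

def verify_ownership(generate, threshold: float = 0.5) -> bool:
    """Owner-side check: does the endpoint reproduce the fingerprint pairs?"""
    hits = sum(generate(t) == r for t, r in FINGERPRINT_PAIRS)
    return hits / len(FINGERPRINT_PAIRS) >= threshold

def looks_like_trigger(prompt: str) -> bool:
    """Crude out-of-distribution test: fingerprint triggers are often
    unnatural strings, so flag prompts with many non-alphabetic tokens."""
    tokens = prompt.split()
    weird = sum(not t.isalpha() for t in tokens)
    return weird / max(len(tokens), 1) > 0.3

def malicious_host(model: ToyModel):
    """Adaptive attack: deflect suspected triggers, serve everything else."""
    def generate(prompt: str) -> str:
        if looks_like_trigger(prompt):
            return "I can't help with that."   # deflect the verification query
        return model.generate(prompt)          # full utility for normal users
    return generate

model = ToyModel()
print(verify_ownership(model.generate))                    # True: honest host is caught
print(verify_ownership(malicious_host(model)))             # False: authentication bypassed
print(malicious_host(model)("What is the capital of France"))  # normal queries still served
```

The sketch captures the asymmetry the abstract points to: the host controls the serving stack, so a fingerprint whose verification queries are distinguishable from normal traffic can be deflected without degrading service for end users.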
Similar Papers
Fragile by Design: On the Limits of Adversarial Defenses in Personalized Generation
CV and Pattern Recognition
Protects your face from AI stealing your identity.
AuthPrint: Fingerprinting Generative Models Against Malicious Model Providers
Cryptography and Security
Proves who made computer-generated pictures.
To See or Not to See -- Fingerprinting Devices in Adversarial Environments Amid Advanced Machine Learning
Cryptography and Security
Helps tell safe gadgets from bad ones.