Attacks and Defenses Against LLM Fingerprinting

Published: August 12, 2025 | arXiv ID: 2508.09021v1

By: Kevin Kurian, Ethan Holland, Sean Oesch

Potential Business Impact:

Makes it harder for attackers to identify which AI model produced a given text.

As large language models are increasingly deployed in sensitive environments, fingerprinting attacks pose significant privacy and security risks. We present a study of LLM fingerprinting from both offensive and defensive perspectives. Our attack methodology uses reinforcement learning to automatically optimize query selection, achieving substantially higher fingerprinting accuracy with only 3 queries than a random selection of 3 queries from the same pool. Our defensive approach employs semantic-preserving output filtering through a secondary LLM to obfuscate model identity while maintaining semantic integrity. The defensive method reduces fingerprinting accuracy across tested models while preserving output quality. These contributions show the potential to improve the capabilities of fingerprinting tools while providing practical mitigation strategies against fingerprinting attacks.
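To make the attack idea concrete, here is a minimal sketch of RL-optimized query selection, framed as an epsilon-greedy bandit over a probe pool where the reward is fingerprinting success. This is an illustration under assumed names only: `QUERY_POOL`, `query_model`, and `identify_model` are hypothetical stand-ins, not the authors' implementation, and the paper's actual RL formulation may differ.

```python
# Sketch: learn which probe queries best fingerprint a target LLM.
# Hypothetical placeholders throughout; not the paper's actual code.
import random

QUERY_POOL = [f"probe-{i}" for i in range(20)]  # candidate probe prompts
N_QUERIES = 3    # queries allowed per fingerprinting attempt (as in the paper)
EPSILON = 0.1    # exploration rate
EPISODES = 500

def query_model(query: str) -> str:
    """Stand-in for sending a probe to the target LLM API."""
    return f"response-to-{query}"

def identify_model(responses: list[str]) -> bool:
    """Stand-in for the attacker's classifier; returns True on a correct
    guess. Here we simply pretend some probes are more discriminative."""
    score = sum(int(r.split("-")[-1]) % 7 for r in responses)
    return random.random() < min(1.0, score / 15)

# Per-query value estimates, updated from episode rewards.
values = {q: 0.0 for q in QUERY_POOL}
counts = {q: 0 for q in QUERY_POOL}

for _ in range(EPISODES):
    if random.random() < EPSILON:
        chosen = random.sample(QUERY_POOL, N_QUERIES)  # explore
    else:
        # exploit: take the current top-valued probes
        chosen = sorted(QUERY_POOL, key=values.get, reverse=True)[:N_QUERIES]
    responses = [query_model(q) for q in chosen]
    reward = 1.0 if identify_model(responses) else 0.0
    for q in chosen:  # incremental mean update of each chosen query's value
        counts[q] += 1
        values[q] += (reward - values[q]) / counts[q]

best = sorted(QUERY_POOL, key=values.get, reverse=True)[:N_QUERIES]
print("Learned probe set:", best)
```

The point of the design is that query selection, not the classifier, is the optimized component: the agent converges on the small probe set whose responses are most model-discriminative.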
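The defense can be sketched just as briefly: route every model response through a secondary "filter" LLM that rewrites it while preserving meaning, stripping the stylistic signal a fingerprinter relies on. Again an assumption-laden illustration: `call_filter_llm`, `FILTER_PROMPT`, and `defended_generate` are hypothetical names, and the stub below only echoes text so the sketch runs end to end; a real deployment would call an actual paraphrasing model.

```python
# Sketch: semantic-preserving output filtering via a secondary LLM.
# All names are hypothetical stand-ins, not the paper's implementation.

FILTER_PROMPT = (
    "Rewrite the following text so that its meaning is fully preserved "
    "but its wording, phrasing, and style are changed:\n\n{text}"
)

def call_filter_llm(prompt: str) -> str:
    """Hypothetical stand-in for the secondary paraphrasing LLM.
    A real deployment would query an actual model here; this stub just
    returns the embedded text so the example is executable."""
    return prompt.split("\n\n", 1)[1]

def defended_generate(target_llm, user_prompt: str) -> str:
    """Serve the target model's answer only after the filter rewrites it."""
    raw = target_llm(user_prompt)                        # original output
    return call_filter_llm(FILTER_PROMPT.format(text=raw))  # obfuscated output

if __name__ == "__main__":
    fake_target = lambda p: f"Certainly! Here is an answer to: {p}"
    print(defended_generate(fake_target, "What causes tides?"))
```

The trade-off the abstract highlights lives in `FILTER_PROMPT`: the rewrite must be aggressive enough to mask model-specific phrasing yet conservative enough to preserve output quality.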

Page Count
11 pages

Category
Computer Science:
Cryptography and Security