Attacks and Defenses Against LLM Fingerprinting
By: Kevin Kurian, Ethan Holland, Sean Oesch
Potential Business Impact:
Stops hackers from guessing which AI wrote a piece of text.
As large language models are increasingly deployed in sensitive environments, fingerprinting attacks pose significant privacy and security risks. We present a study of LLM fingerprinting from both offensive and defensive perspectives. Our attack methodology uses reinforcement learning to automatically optimize query selection, achieving higher fingerprinting accuracy with only three queries than random selection of three queries from the same pool. Our defensive approach employs semantic-preserving output filtering through a secondary LLM, obfuscating model identity while maintaining semantic integrity. The defense reduces fingerprinting accuracy across the tested models while preserving output quality. These contributions show the potential to improve the capabilities of fingerprinting tools while providing practical mitigation strategies against fingerprinting attacks.
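The abstract gives only a high-level description of the two techniques, so the sketches below are illustrative rather than the authors' implementation. The first shows one plausible way to frame query selection as a reinforcement-learning-style search: an epsilon-greedy loop over a pool of candidate queries, where the reward is assumed to be the identification accuracy a fingerprinting classifier achieves with a given three-query subset. The names (select_queries, reward_fn, pool) are hypothetical.

import random
from typing import Callable, Sequence, Tuple

def select_queries(
    pool: Sequence[str],
    reward_fn: Callable[[Tuple[str, ...]], float],
    k: int = 3,
    episodes: int = 200,
    epsilon: float = 0.2,
) -> Tuple[str, ...]:
    """Epsilon-greedy search for a k-query subset that maximizes fingerprinting reward.

    reward_fn is assumed to run the candidate queries against the victim
    model(s) and return the resulting identification accuracy.
    """
    best, best_reward = None, float("-inf")
    for _ in range(episodes):
        if best is None or random.random() < epsilon:
            # Explore: draw a fresh random subset from the pool.
            candidate = tuple(random.sample(list(pool), k))
        else:
            # Exploit: perturb the best subset found so far by swapping one query.
            mutated = list(best)
            alternatives = [q for q in pool if q not in mutated]
            if alternatives:
                mutated[random.randrange(k)] = random.choice(alternatives)
            candidate = tuple(mutated)
        reward = reward_fn(candidate)
        if reward > best_reward:
            best, best_reward = candidate, reward
    return best

The second sketch illustrates the defensive idea of routing every response through a secondary model before it is released: a wrapper prompts a rewriting model to paraphrase the target model's answer, aiming to strip stylistic cues that fingerprinting relies on while keeping the meaning intact. The names (make_filtered_model, REWRITE_INSTRUCTION) and the rewriting prompt are assumptions; the generate-style callables stand in for whatever API clients a real deployment would use.

from typing import Callable

# A text-generation function: prompt in, completion out.
GenerateFn = Callable[[str], str]

# Assumed paraphrase instruction; the paper's actual prompt is not given in the abstract.
REWRITE_INSTRUCTION = (
    "Rewrite the following text so that it keeps the same meaning but uses "
    "different wording, sentence structure, and formatting.\n\n"
)

def make_filtered_model(target: GenerateFn, rewriter: GenerateFn) -> GenerateFn:
    """Wrap target so every reply is paraphrased by rewriter before release."""
    def filtered(prompt: str) -> str:
        raw_reply = target(prompt)
        return rewriter(REWRITE_INSTRUCTION + raw_reply)
    return filtered

# Usage with stand-in callables instead of real model clients:
if __name__ == "__main__":
    target = lambda p: f"[target model reply to: {p}]"
    rewriter = lambda p: f"[paraphrased] {p.splitlines()[-1]}"
    protected = make_filtered_model(target, rewriter)
    print(protected("What is the capital of France?"))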
Similar Papers
SoK: Large Language Model Copyright Auditing via Fingerprinting
Cryptography and Security
Protects AI from being copied or stolen.
Attack and defense techniques in large language models: A survey and new perspectives
Cryptography and Security
Protects smart computer programs from being tricked.
A Survey: Towards Privacy and Security in Mobile Large Language Models
Cryptography and Security
Keeps your phone's smart talk private and safe.