AuthPrint: Fingerprinting Generative Models Against Malicious Model Providers
By: Kai Yao, Marc Juarez
Potential Business Impact:
Proves which AI model made a computer-generated picture.
Generative models are increasingly adopted in high-stakes domains, yet current deployments offer no mechanism to verify the origin of model outputs. We address this gap by extending model fingerprinting techniques beyond the traditional collaborative setting to one where the model provider may act adversarially. To our knowledge, this is the first work to evaluate fingerprinting for provenance attribution under such a threat model. Our methods rely on a trusted verifier that extracts secret fingerprints, unknown to the provider, from the model's output space and trains a model to predict and verify them. Our empirical evaluation shows that our methods achieve near-zero FPR@95%TPR for instances of GAN and diffusion models, even when tested on small modifications to the original architecture and training data. Moreover, the methods remain robust against adversarial attacks that actively modify the outputs to bypass detection. Source code is available at https://github.com/PSMLab/authprint.
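The abstract describes the verification pipeline only at a high level, so the sketch below is a minimal illustration of that idea, not the authors' AuthPrint implementation (the actual code is in the linked repository). It assumes a generator callable `model(z)` exposing a `latent_dim` attribute, uses image values at secret pixel coordinates held by the verifier as the fingerprint, and stands in a small MLP (`FingerprintPredictor`) for the verifier's predictor; these names, architectures, and choices are illustrative assumptions.

```python
# Hedged sketch of the fingerprint-verification idea from the abstract.
# Not the AuthPrint code; the fingerprint definition (values at secret pixel
# coordinates), the predictor architecture, and `model.latent_dim` are assumptions.

import torch
import torch.nn as nn


def extract_fingerprint(images: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
    """Read output values at the verifier's secret coordinates.

    images: (B, C, H, W) model outputs; coords: (K, 2) secret (row, col) pairs.
    Returns a (B, C*K) fingerprint vector per image.
    """
    rows, cols = coords[:, 0], coords[:, 1]
    return images[:, :, rows, cols].flatten(start_dim=1)


class FingerprintPredictor(nn.Module):
    """Small regressor that predicts the secret fingerprint from the full output."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.net(images.flatten(start_dim=1))


def train_verifier(model, coords, img_shape, steps=1000, batch=32, lr=1e-3, device="cpu"):
    """Fit the predictor on outputs sampled from the genuine (fingerprinted) model."""
    c, h, w = img_shape
    predictor = FingerprintPredictor(c * h * w, c * coords.shape[0]).to(device)
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            x = model(torch.randn(batch, model.latent_dim, device=device))
        loss = nn.functional.mse_loss(predictor(x), extract_fingerprint(x, coords))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return predictor


@torch.no_grad()
def verify(image, predictor, coords, threshold):
    """Accept `image` (shape (1, C, H, W)) as coming from the fingerprinted model
    if the predictor reproduces the secret fingerprint within `threshold`."""
    err = nn.functional.mse_loss(predictor(image), extract_fingerprint(image, coords))
    return err.item() < threshold
```

Under these assumptions, outputs from the genuine model yield low prediction error, while outputs from a modified model or adversarially edited images drift away from the learned fingerprint statistics; sweeping the threshold trades off false positives against true positives, which is what the reported FPR@95%TPR summarizes.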
Similar Papers
Are Robust LLM Fingerprints Adversarially Robust?
Cryptography and Security
Cracks computer "fingerprints" that prove ownership.
Causal Fingerprints of AI Generative Models
CV and Pattern Recognition
Finds fake pictures by spotting AI's hidden style.
PALADIN: Robust Neural Fingerprinting for Text-to-Image Diffusion Models
CV and Pattern Recognition
Identifies fake images made by AI.