Score: 1

AuthPrint: Fingerprinting Generative Models Against Malicious Model Providers

Published: August 6, 2025 | arXiv ID: 2508.05691v1

By: Kai Yao, Marc Juarez

Potential Business Impact:

Verifies which AI model produced a generated image, even if the model's provider tries to cheat.

Generative models are increasingly adopted in high-stakes domains, yet current deployments offer no mechanism to verify the origin of model outputs. We address this gap by extending model fingerprinting techniques beyond the traditional collaborative setting to one where the model provider may act adversarially. To our knowledge, this is the first work to evaluate fingerprinting for provenance attribution under such a threat model. Our methods rely on a trusted verifier that extracts secret fingerprints, unknown to the provider, from the model's output space and trains a model to predict and verify them. Our empirical evaluation shows that the methods achieve near-zero FPR@95%TPR on instances of GAN and diffusion models, even when tested against small modifications to the original architecture and training data. Moreover, the methods remain robust against adversarial attacks that actively modify the outputs to bypass detection. Source code is available at https://github.com/PSMLab/authprint.
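The verifier-side pipeline described in the abstract (extract secret fingerprints from the model's output space, train a predictor, verify) can be illustrated with a minimal sketch. The sketch below rests on assumptions of ours: the fingerprint is taken to be a secret subset of output coordinates, and the predictor reconstructs those values from the remaining coordinates. All names here are hypothetical illustrations, not the authors' actual method or API.

```python
# Minimal sketch of verifier-side fingerprinting, under the assumption
# that the secret fingerprint is a hidden subset of output coordinates.
# Hypothetical illustration only; see the linked repository for the
# authors' actual implementation.
import torch
import torch.nn as nn

def split_secret(outputs: torch.Tensor, secret_idx: torch.Tensor):
    """Split flattened outputs into secret fingerprint values and the rest."""
    flat = outputs.flatten(1)
    mask = torch.ones(flat.shape[1], dtype=torch.bool)
    mask[secret_idx] = False  # hide the verifier's secret coordinates
    return flat[:, secret_idx], flat[:, mask]

def train_predictor(outputs: torch.Tensor, secret_idx: torch.Tensor,
                    epochs: int = 100, lr: float = 1e-3) -> nn.Module:
    """Fit a regressor mapping non-secret coordinates to fingerprint values,
    using outputs sampled from the genuine (fingerprinted) model."""
    fp, rest = split_secret(outputs, secret_idx)
    net = nn.Sequential(nn.Linear(rest.shape[1], 256), nn.ReLU(),
                        nn.Linear(256, fp.shape[1]))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(rest), fp)
        loss.backward()
        opt.step()
    return net

@torch.no_grad()
def verify(output: torch.Tensor, net: nn.Module,
           secret_idx: torch.Tensor, threshold: float = 1e-2) -> bool:
    """Accept the claimed provenance iff the predictor, trained on the
    genuine model's outputs, reconstructs the secret fingerprint well."""
    fp, rest = split_secret(output, secret_idx)
    return nn.functional.mse_loss(net(rest), fp).item() < threshold
```

In this reading, outputs from the genuine model yield low reconstruction error, while outputs from a substituted or modified model should not; the acceptance threshold would be calibrated to a target operating point such as the FPR@95%TPR reported in the abstract.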

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links
https://github.com/PSMLab/authprint
Page Count
12 pages

Category
Computer Science:
Cryptography and Security