Causal Fingerprints of AI Generative Models
By: Hui Xu, Chi Liu, Congcong Zhu, and more
Potential Business Impact:
Finds fake pictures by spotting AI's hidden style.
AI generative models leave implicit traces in the images they generate; these traces are commonly referred to as model fingerprints and are exploited for source attribution. Prior methods rely on model-specific cues or synthesis artifacts, yielding limited fingerprints that may generalize poorly across different generative models. We argue that a complete model fingerprint should reflect the causality between image provenance and model traces, a direction that remains largely unexplored. To this end, we conceptualize the "causal fingerprint" of generative models and propose a causality-decoupling framework that disentangles it from image-specific content and style in a semantic-invariant latent space derived from the reconstruction residuals of a pre-trained diffusion model. We further enhance fingerprint granularity with diverse feature representations. We validate causality by assessing attribution performance across representative GANs and diffusion models and by achieving source anonymization using counterfactual examples generated from causal fingerprints. Experiments show our approach outperforms existing methods in model attribution, indicating strong potential for forgery detection, model copyright tracing, and identity protection.
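The pipeline the abstract describes (reconstruction residual → fingerprint features → source attribution) can be illustrated with a minimal toy sketch. Everything here is a stand-in: the box-blur `reconstruct`, the spectral `residual_features`, and the nearest-centroid `attribute` are hypothetical simplifications, not the paper's pre-trained diffusion model or its learned causality-decoupling framework; only the idea that model traces live in what reconstruction cannot reproduce is kept.

```python
import numpy as np

def reconstruct(x):
    # Toy stand-in for the paper's pre-trained diffusion reconstruction:
    # a 3x3 box blur. Only the residual idea is kept, not the real model.
    pad = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = pad[i:i + 3, j:j + 3].mean()
    return out

def residual_features(x):
    # Hypothesis from the abstract: model traces live in the part of the
    # image the reconstruction cannot reproduce, i.e. the residual.
    r = x - reconstruct(x)
    # Coarse spectral summary of the residual as a fingerprint feature.
    return np.abs(np.fft.rfft2(r)).mean(axis=0)

def attribute(x, centroids):
    # Nearest-centroid source attribution over residual features.
    f = residual_features(x)
    return min(centroids, key=lambda name: np.linalg.norm(f - centroids[name]))
```

In this sketch, attribution reduces to comparing a test image's residual spectrum against per-model centroids built from known samples; the paper instead learns the fingerprint space so that it captures the causal link between provenance and traces rather than a fixed hand-crafted statistic.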
Similar Papers
AI-Generated Image Detection: An Empirical Study and Future Research Directions
CV and Pattern Recognition
Finds fake videos and pictures made by computers.
Could AI Trace and Explain the Origins of AI-Generated Images and Text?
Computation and Language
Finds fake AI pictures and writing.
Natural Fingerprints of Large Language Models
Computation and Language
Finds hidden "fingerprints" in AI writing.