Where's the liability in the Generative Era? Recovery-based Black-Box Detection of AI-Generated Content
By: Haoyue Bai, Yiyou Sun, Wei Cheng, and more
Potential Business Impact:
Spots fake pictures made by AI, using only the model's API.
The recent proliferation of photorealistic images created by generative models has sparked both excitement and concern, as these images are increasingly indistinguishable from real ones to the human eye. While they offer new creative and commercial possibilities, their potential for misuse, such as in misinformation and fraud, highlights the need for effective detection methods. Current detection approaches often rely on access to model weights or require extensive collections of real images, limiting their scalability and practical application in real-world scenarios. In this work, we introduce a novel black-box detection framework that requires only API access, sidestepping the need for model weights or large auxiliary datasets. Our approach leverages a corrupt-and-recover strategy: by masking part of an image and assessing the model's ability to reconstruct it, we measure the likelihood that the image was generated by the model itself. For black-box models that do not support masked image inputs, we incorporate a cost-efficient surrogate model trained to align with the target model's distribution, enhancing detection capability. Our framework demonstrates strong performance, outperforming baseline methods by 4.31% in mean average precision across datasets from eight diffusion model variants.
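The corrupt-and-recover loop the abstract describes can be sketched compactly. The Python below is illustrative only, not the authors' implementation: `inpaint_api` is a hypothetical stand-in for any black-box masked-image completion endpoint, and the random-square masking, squared-error metric, and decision threshold are all assumptions made for the sketch.

```python
# A minimal sketch of the corrupt-and-recover idea (assumptions noted above).
import numpy as np

def corrupt(image: np.ndarray, mask_frac: float = 0.25, seed: int = 0):
    """Zero out a random square region; return the masked image and the mask."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    mh, mw = int(h * mask_frac), int(w * mask_frac)
    top = int(rng.integers(0, h - mh))
    left = int(rng.integers(0, w - mw))
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + mh, left:left + mw] = True
    masked = image.copy()
    masked[mask] = 0.0
    return masked, mask

def recovery_score(image: np.ndarray, inpaint_api, n_trials: int = 4) -> float:
    """Average reconstruction error over several random masks.

    Intuition from the paper: a model reconstructs its *own* images more
    faithfully, so a lower error suggests the image is model-generated.
    `inpaint_api(masked, mask)` is the hypothetical black-box call.
    """
    errors = []
    for trial in range(n_trials):
        masked, mask = corrupt(image, seed=trial)
        recovered = inpaint_api(masked, mask)  # black-box reconstruction
        errors.append(float(((recovered[mask] - image[mask]) ** 2).mean()))
    return float(np.mean(errors))

# Usage: flag an image as model-generated when the score falls below a
# threshold calibrated on a few known real/generated samples. The value
# below is purely illustrative.
# score = recovery_score(img, inpaint_api)
# is_generated = score < 0.01
```

For target APIs that cannot take masked inputs, the paper's surrogate-model variant would replace `inpaint_api` with a locally trained model aligned to the target's distribution; the scoring logic stays the same.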
Similar Papers
Robustness in AI-Generated Detection: Enhancing Resistance to Adversarial Attacks
CV and Pattern Recognition
Stops fake faces from fooling computer detectors.
GenAI Confessions: Black-box Membership Inference for Generative Image Models
CV and Pattern Recognition
Finds if AI used your art to learn.
Unmasking Synthetic Realities in Generative AI: A Comprehensive Review of Adversarially Robust Deepfake Detection Systems
Cryptography and Security
Finds fake videos to stop lies.