Exploiting the Final Component of Generator Architectures for AI-Generated Image Detection
By: Yanzhu Liu, Xiao Liu, Yuexuan Wang, and more
Potential Business Impact:
Spots fake pictures made by new AI tools.
With the rapid proliferation of powerful image generators, accurate detection of AI-generated images has become essential for maintaining a trustworthy online environment. However, existing deepfake detectors often generalize poorly to images produced by unseen generators. Notably, despite being trained under vastly different paradigms, such as diffusion or autoregressive modeling, many modern image generators share common final architectural components that serve as the last stage for converting intermediate representations into images. Motivated by this insight, we propose to "contaminate" real images using the generator's final component and train a detector to distinguish them from the original real images. We further introduce a taxonomy based on generators' final components and categorize 21 widely used generators accordingly, enabling a comprehensive investigation of our method's generalization capability. Using only 100 samples from each of three representative categories, our detector, fine-tuned on the DINOv3 backbone, achieves an average accuracy of 98.83% across 22 testing sets from unseen generators.
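The core idea of "contamination" can be sketched in a few lines: pass real images through a lossy round trip that stands in for a generator's final component, so the output inherits that component's reconstruction artifacts, then train a detector to separate contaminated from original images. The toy round trip (downsample plus nearest-neighbor upsample), the gradient-energy feature, and the one-feature threshold classifier below are all illustrative assumptions, not the paper's actual pipeline, which uses real generator components and a fine-tuned DINOv3 backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_component_roundtrip(img, factor=2):
    """Hypothetical stand-in for a generator's final component:
    a lossy encode/decode round trip (downsample, then nearest-
    neighbor upsample) that imprints reconstruction artifacts."""
    small = img[::factor, ::factor]                           # crude "encode"
    return np.repeat(np.repeat(small, factor, 0), factor, 1)  # crude "decode"

def high_freq_energy(img):
    """Feature: mean squared horizontal + vertical gradients.
    The blocky round trip suppresses fine-grained variation,
    so contaminated images score lower than noisy real ones."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()

# "Real" images: smooth gradient plus fine-grained sensor-like noise.
reals = [np.linspace(0, 1, 32)[None, :] * np.ones((32, 1))
         + 0.1 * rng.standard_normal((32, 32)) for _ in range(50)]
contaminated = [final_component_roundtrip(im) for im in reals]

# Train a one-feature threshold "detector" on the two classes.
f_real = np.array([high_freq_energy(im) for im in reals])
f_fake = np.array([high_freq_energy(im) for im in contaminated])
threshold = (f_real.mean() + f_fake.mean()) / 2

acc = (np.mean(f_real > threshold) + np.mean(f_fake <= threshold)) / 2
print(f"detector accuracy on toy data: {acc:.2f}")
```

The key property mirrored here is that the detector never needs samples from a full generator: it learns only the signature left by the final component, which is why the paper's approach can transfer across many unseen generators that share that component type.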
Similar Papers
Generalized Design Choices for Deepfake Detectors
CV and Pattern Recognition
Finds fake videos more reliably.
Rethinking Cross-Generator Image Forgery Detection through DINOv3
CV and Pattern Recognition
Finds fake pictures made by many different AI generators.
Methods and Trends in Detecting AI-Generated Images: A Comprehensive Review
CV and Pattern Recognition
Finds fake pictures made by AI.