Rethinking Cross-Generator Image Forgery Detection through DINOv3
By: Zhenglin Huang, Jason Li, Haiquan Wen, and more
Potential Business Impact:
Finds fake pictures made by many different AI models.
As generative models become increasingly diverse and powerful, cross-generator detection has emerged as a new challenge. Existing detection methods often memorize artifacts of specific generative models rather than learning transferable cues, leading to substantial failures on unseen generators. Surprisingly, this work finds that frozen visual foundation models, especially DINOv3, already exhibit strong cross-generator detection capability without any fine-tuning. Through systematic studies from frequency, spatial, and token perspectives, we observe that DINOv3 tends to rely on global, low-frequency structures as weak but transferable authenticity cues rather than high-frequency, generator-specific artifacts. Motivated by this insight, we introduce a simple, training-free token-ranking strategy, followed by a lightweight linear probe, to select a small subset of authenticity-relevant tokens. This token subset consistently improves detection accuracy across all evaluated datasets. Our study provides empirical evidence and a plausible hypothesis for why foundation models generalize across diverse generators, offering a universal, efficient, and interpretable baseline for image forgery detection.
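The pipeline described above (frozen backbone, token ranking, top-k selection, linear probe) can be sketched in a few lines. This is a minimal sketch, assuming a PyTorch DINOv3-style ViT that exposes patch tokens; the `get_intermediate_layers` call, the mean-difference ranking heuristic, and the choice of k=32 are illustrative assumptions, not the paper's exact method.

```python
import torch
from sklearn.linear_model import LogisticRegression

# Hypothetical: load a frozen DINOv3 backbone. The hub repo/entry names are
# assumptions; substitute whatever checkpoint loader you actually use.
# backbone = torch.hub.load("facebookresearch/dinov3", "dinov3_vitb16")
# backbone.eval()

@torch.no_grad()
def patch_tokens(backbone, images):
    """Return patch tokens of shape (B, N, D) from a frozen ViT backbone."""
    return backbone.get_intermediate_layers(images, n=1)[0]  # assumed API

def rank_tokens(real_tokens, fake_tokens):
    """Training-free token ranking (illustrative heuristic): score each
    token position by how far apart the mean real and fake embeddings
    are at that position, then sort positions by that score."""
    diff = real_tokens.mean(0) - fake_tokens.mean(0)   # (N, D)
    scores = diff.norm(dim=-1)                         # (N,)
    return scores.argsort(descending=True)             # ranked token indices

def probe_features(tokens, ranked_idx, k=32):
    """Mean-pool the top-k ranked tokens into one feature vector per image."""
    selected = tokens[:, ranked_idx[:k], :]            # (B, k, D)
    return selected.mean(dim=1).cpu().numpy()          # (B, D)

# Usage sketch: rank once on a small held-out split, then fit a linear probe.
# order = rank_tokens(patch_tokens(backbone, real_imgs),
#                     patch_tokens(backbone, fake_imgs))
# X = probe_features(patch_tokens(backbone, train_imgs), order)
# clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
```

In this sketch the ranking step requires no gradient updates; only the scikit-learn logistic-regression probe is fit, which matches the lightweight, frozen-backbone setup the abstract describes.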
Similar Papers
Frequency Bias Matters: Diving into Robust and Generalized Deep Image Forgery Detection
Cryptography and Security
Finds fake pictures made by computers.
DINOv3
CV and Pattern Recognition
Teaches computers to see and understand images better.
DINO-Detect: A Simple yet Effective Framework for Blur-Robust AI-Generated Image Detection
CV and Pattern Recognition
Finds fake pictures even when they're blurry.