Score: 1

Rethinking Cross-Generator Image Forgery Detection through DINOv3

Published: November 27, 2025 | arXiv ID: 2511.22471v1

By: Zhenglin Huang, Jason Li, Haiquan Wen, and more

Potential Business Impact:

Finds fake pictures made by many different AI image generators.

Business Areas:
Image Recognition, Data and Analytics, Software

As generative models become increasingly diverse and powerful, cross-generator detection has emerged as a new challenge. Existing detection methods often memorize artifacts of specific generative models rather than learning transferable cues, leading to substantial failures on unseen generators. Surprisingly, this work finds that frozen visual foundation models, especially DINOv3, already exhibit strong cross-generator detection capability without any fine-tuning. Through systematic studies on frequency, spatial, and token perspectives, we observe that DINOv3 tends to rely on global, low-frequency structures as weak but transferable authenticity cues instead of high-frequency, generator-specific artifacts. Motivated by this insight, we introduce a simple, training-free token-ranking strategy followed by a lightweight linear probe to select a small subset of authenticity-relevant tokens. This token subset consistently improves detection accuracy across all evaluated datasets. Our study provides empirical evidence and a feasible hypothesis for understanding why foundation models generalize across diverse generators, offering a universal, efficient, and interpretable baseline for image forgery detection.
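
To make the described pipeline concrete, below is a minimal sketch of the "token ranking + lightweight linear probe" idea on stand-in features. The feature array is synthetic (a placeholder for frozen DINOv3 patch tokens), and the per-token separability score used for ranking is an illustrative assumption; the paper's exact training-free ranking rule may differ.

```python
# Minimal sketch (not the authors' code): rank token positions by a simple
# real-vs-fake separability score, keep a small subset, and fit a linear probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for frozen DINOv3 patch tokens: (n_images, n_tokens, dim).
n_images, n_tokens, dim = 200, 196, 768
tokens = rng.normal(size=(n_images, n_tokens, dim)).astype(np.float32)
labels = rng.integers(0, 2, size=n_images)  # 0 = real, 1 = fake

# 1) Training-free token ranking: score each token position by how well the
#    class-mean direction separates real from fake at that position
#    (an assumed criterion, used here only for illustration).
def token_score(feats, y):
    direction = feats[y == 1].mean(0) - feats[y == 0].mean(0)
    proj = feats @ direction
    gap = abs(proj[y == 1].mean() - proj[y == 0].mean())
    return gap / (proj.std() + 1e-8)

scores = np.array([token_score(tokens[:, t], labels) for t in range(n_tokens)])
top_k = 16  # small authenticity-relevant subset
selected = np.argsort(scores)[::-1][:top_k]

# 2) Lightweight linear probe trained only on the selected tokens.
X = tokens[:, selected].reshape(n_images, -1)
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("train accuracy on stand-in data:", probe.score(X, labels))
```

In practice, the token features would come from a frozen DINOv3 backbone rather than random arrays, and the ranking would be computed once on a validation split before fitting the probe.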

Repos / Data Links

Page Count
15 pages

Category
Computer Science: Computer Vision and Pattern Recognition