Beyond Spectral Peaks: Interpreting the Cues Behind Synthetic Image Detection
By: Sara Mandelli, Diego Vila-Portela, David Vázquez-Padín, and others
Potential Business Impact:
Finds fake pictures by looking for hidden patterns.
Over the years, the forensics community has proposed several deep learning-based detectors to mitigate the risks of generative AI. Recently, frequency-domain artifacts, particularly periodic peaks in the magnitude spectrum, have received significant attention, as they are often considered a strong indicator of synthetic image generation. However, state-of-the-art detectors are typically used as black boxes, and it remains unclear whether they truly rely on these peaks, which limits their interpretability and trustworthiness. In this work, we conduct a systematic study to address this question. We propose a strategy to remove spectral peaks from images and analyze the impact of this operation on several detectors. In addition, we introduce a simple linear detector that relies exclusively on frequency peaks, providing a fully interpretable baseline free from the confounding influence of deep learning. Our findings reveal that most detectors are not fundamentally dependent on spectral peaks, challenging a widespread assumption in the field and paving the way for more transparent and reliable forensic tools.
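To make the idea of spectral-peak removal concrete, here is a minimal sketch of one plausible approach: notch-filtering the strongest off-center bins in the magnitude spectrum and inverting the FFT. This is an illustrative assumption, not the paper's actual strategy; the function name, the fixed number of peaks, and the median-replacement notch are all choices made here for brevity.

```python
import numpy as np

def remove_spectral_peaks(img, num_peaks=8, guard=4):
    """Suppress the strongest off-center peaks in the 2D magnitude spectrum.

    Hypothetical sketch (not the paper's method): notch-filter the
    `num_peaks` largest magnitude bins, excluding a `guard`-sized band
    around DC, by replacing them with the global median magnitude.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    mag, phase = np.abs(F), np.angle(F)
    h, w = mag.shape
    cy, cx = h // 2, w // 2

    # Zero out the low-frequency region so natural image energy near DC
    # is not mistaken for a synthesis artifact.
    search = mag.copy()
    search[cy - guard:cy + guard + 1, cx - guard:cx + guard + 1] = 0

    # Pick the strongest remaining bins and flatten them to the median
    # magnitude of the spectrum (a crude notch filter).
    flat = np.argsort(search, axis=None)[-num_peaks:]
    ys, xs = np.unravel_index(flat, mag.shape)
    mag[ys, xs] = np.median(mag)

    filtered = mag * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

Feeding a detector both the original image and this peak-suppressed version is one way to probe whether its decision actually depends on those periodic peaks.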
Similar Papers
Frequency Bias Matters: Diving into Robust and Generalized Deep Image Forgery Detection
Cryptography and Security
Finds fake pictures made by computers.
Enhanced Deep Learning DeepFake Detection Integrating Handcrafted Features
CV and Pattern Recognition
Catches fake faces in pictures and videos.
Robust AI-Synthesized Image Detection via Multi-feature Frequency-aware Learning
Graphics
Finds fake pictures made by AI.