Frequency Bias Matters: Diving into Robust and Generalized Deep Image Forgery Detection
By: Chi Liu, Tianqing Zhu, Wanlei Zhou, and more
Potential Business Impact:
Finds fake pictures made by computers.
As deep image forgery powered by AI generative models, such as GANs, continues to challenge today's digital world, detecting AI-generated forgeries has become a vital security topic. Generalizability and robustness are two critical concerns of a forgery detector, determining its reliability when facing unknown GANs and noisy samples in an open world. Although many studies focus on improving these two properties, the root causes of these problems have not been fully explored, and it is unclear whether there is a connection between them. Moreover, despite recent achievements in addressing these issues from the forensic or anti-forensic side, a universal method that can contribute to both sides simultaneously remains practically significant yet unavailable. In this paper, we provide a fundamental explanation of these problems from a frequency perspective. Our analysis reveals that the frequency bias of a DNN forgery detector is a possible cause of its generalization and robustness issues. Based on this finding, we propose a two-step frequency alignment method to remove the frequency discrepancy between real and fake images, offering double-sided benefits: it can serve as a strong black-box attack against forgery detectors in the anti-forensic context or, conversely, as a universal defense to improve detector reliability in the forensic context. We also develop corresponding attack and defense implementations and demonstrate their effectiveness, as well as the effect of the frequency alignment method, in various experimental settings involving twelve detectors, eight forgery models, and five metrics.
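To make the idea of "frequency alignment" concrete, below is a minimal sketch of one plausible realization: replacing a fake image's FFT magnitude spectrum with a reference magnitude estimated from real images while keeping the fake image's phase. The function names, the averaging over real images, and the magnitude-swap rule are illustrative assumptions for exposition, not the authors' actual two-step procedure.

```python
# Illustrative sketch of spectrum-level frequency alignment (assumed form,
# not the paper's implementation): impose a real-image reference magnitude
# spectrum on a fake image while preserving its phase.
import numpy as np

def average_magnitude_spectrum(real_images):
    """Estimate a reference FFT magnitude from a batch of real grayscale images (H, W)."""
    mags = [np.abs(np.fft.fft2(img.astype(np.float64))) for img in real_images]
    return np.mean(mags, axis=0)

def align_frequency(fake_image, reference_magnitude):
    """Replace the fake image's FFT magnitude with the reference, keep its phase."""
    spectrum = np.fft.fft2(fake_image.astype(np.float64))
    phase = np.angle(spectrum)
    aligned_spectrum = reference_magnitude * np.exp(1j * phase)
    aligned = np.real(np.fft.ifft2(aligned_spectrum))
    return np.clip(aligned, 0, 255).astype(np.uint8)

# Usage (arrays of identical shape):
# ref_mag = average_magnitude_spectrum(real_batch)
# aligned_fake = align_frequency(fake_img, ref_mag)
```

Under this reading, the same operation supports both roles described in the abstract: applied by an adversary it hides the spectral artifacts a detector relies on (black-box attack), while applied as preprocessing during training or inference it removes the frequency shortcut and pushes the detector toward more generalizable cues (defense).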
Similar Papers
A Dual-Branch CNN for Robust Detection of AI-Generated Facial Forgeries
CV and Pattern Recognition
Finds fake faces in pictures better than people.
Disruptive Attacks on Face Swapping via Low-Frequency Perceptual Perturbations
CV and Pattern Recognition
Stops fake videos from fooling people.
Beyond Spectral Peaks: Interpreting the Cues Behind Synthetic Image Detection
CV and Pattern Recognition
Finds fake pictures by looking for hidden patterns.