Detecting Localized Deepfakes: How Well Do Synthetic Image Detectors Handle Inpainting?
By: Serafino Pandolfini, Lorenzo Pellegrini, Matteo Ferrara, and others
The rapid progress of generative AI has enabled highly realistic image manipulations, including inpainting and region-level editing. These approaches preserve most of the original visual context and are increasingly exploited in cybersecurity-relevant threat scenarios. While numerous detectors have been proposed for identifying fully synthetic images, their ability to generalize to localized manipulations remains insufficiently characterized. This work presents a systematic evaluation of state-of-the-art detectors, originally trained for deepfake detection on fully synthetic images, when applied to a distinct challenge: localized inpainting detection. The study leverages multiple datasets spanning diverse generators, mask sizes, and inpainting techniques. Our experiments show that models trained on a large set of generators exhibit partial transferability to inpainting-based edits and can reliably detect medium- and large-area manipulations or regeneration-style inpainting, outperforming many existing ad hoc detection approaches.
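The abstract reports detection performance broken down by mask size. A minimal sketch of how such a per-mask-size breakdown could be computed, assuming you already have per-image detector scores, ground-truth labels, and the manipulated-area ratio for each image (the function name, bin edges, and toy data below are illustrative, not the authors' code):

```python
import numpy as np

def accuracy_by_mask_size(scores, labels, mask_ratios,
                          bins=(0.0, 0.1, 0.3, 1.0), threshold=0.5):
    """Group test images by manipulated-area ratio and report
    detection accuracy per bucket (e.g. small/medium/large masks)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    mask_ratios = np.asarray(mask_ratios, dtype=float)
    # Binarize detector scores at the chosen threshold.
    preds = (scores >= threshold).astype(int)
    results = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (mask_ratios > lo) & (mask_ratios <= hi)
        if in_bin.any():
            results[(lo, hi)] = float((preds[in_bin] == labels[in_bin]).mean())
    return results

# Toy example: four inpainted images (label 1) with varying mask sizes.
acc = accuracy_by_mask_size(
    scores=[0.9, 0.2, 0.8, 0.6],
    labels=[1, 1, 1, 1],
    mask_ratios=[0.05, 0.08, 0.2, 0.5],
)
```

A breakdown like this makes the abstract's finding concrete: accuracy in the small-mask bucket can lag behind the medium- and large-mask buckets even when the same detector and threshold are used throughout.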
Similar Papers
Realism to Deception: Investigating Deepfake Detectors Against Face Enhancement
CV and Pattern Recognition
Makes fake faces harder to spot.
From Inpainting to Layer Decomposition: Repurposing Generative Inpainting Models for Image Layer Decomposition
CV and Pattern Recognition
Lets you edit parts of a picture separately.
Combating Digitally Altered Images: Deepfake Detection
CV and Pattern Recognition
Finds fake pictures and videos made by computers.