Morphology-optimized Multi-Scale Fusion: Combining Local Artifacts and Mesoscopic Semantics for Deepfake Detection and Localization
By: Chao Shuai, Gaojian Wang, Kun Pan, and more
Potential Business Impact:
Finds fake parts in pictures better.
While the pursuit of higher accuracy in deepfake detection remains a central goal, there is an increasing demand for precise localization of manipulated regions. Despite the remarkable progress made in classification-based detection, accurately localizing forged areas remains a significant challenge. A common strategy is to incorporate forged-region annotations alongside manipulated images during model training. However, such approaches often neglect the complementary nature of local detail and global semantic context, resulting in suboptimal localization performance. Moreover, an often-overlooked aspect is the fusion strategy between local and global predictions: naively combining the outputs of both branches can amplify noise and errors, thereby undermining localization quality. To address these issues, we propose a novel approach that predicts manipulated regions independently from local and global perspectives. We then employ morphological operations to fuse the two outputs, effectively suppressing noise while enhancing spatial coherence. Extensive experiments demonstrate the effectiveness of each module in improving the accuracy and robustness of forgery localization.
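To illustrate the fusion idea described in the abstract, the sketch below shows one way morphological operations can merge local and global prediction maps into a single localization mask. This is a minimal illustration using OpenCV, not the authors' implementation; the function name `fuse_masks` and parameters such as `threshold` and `kernel_size` are hypothetical, and the paper's actual fusion strategy may differ.

```python
# A minimal sketch (not the authors' code) of morphology-based fusion of two
# predicted manipulation masks. Names and defaults here are assumptions.
import cv2
import numpy as np


def fuse_masks(local_prob: np.ndarray,
               global_prob: np.ndarray,
               threshold: float = 0.5,
               kernel_size: int = 5) -> np.ndarray:
    """Fuse local and global forgery probability maps (H x W, values in [0, 1])
    into one binary localization mask using morphological cleanup."""
    # Binarize each branch's prediction independently.
    local_mask = (local_prob >= threshold).astype(np.uint8)
    global_mask = (global_prob >= threshold).astype(np.uint8)

    # A naive union of the two branches can amplify noise from either one.
    fused = cv2.bitwise_or(local_mask, global_mask)

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    # Opening removes small isolated false positives (noise suppression).
    fused = cv2.morphologyEx(fused, cv2.MORPH_OPEN, kernel)
    # Closing fills small holes and joins nearby regions (spatial coherence).
    fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)
    return fused


if __name__ == "__main__":
    # Random probability maps stand in for the two branch outputs.
    rng = np.random.default_rng(0)
    local_prob = rng.random((256, 256)).astype(np.float32)
    global_prob = rng.random((256, 256)).astype(np.float32)
    mask = fuse_masks(local_prob, global_prob)
    print("Fused mask shape:", mask.shape,
          "manipulated pixels:", int(mask.sum()))
```

The opening-then-closing order reflects the stated goals of suppressing spurious detections before consolidating the surviving regions; other kernel shapes or orderings are equally plausible readings of the abstract.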
Similar Papers
Towards Generalizable Deepfake Detection with Spatial-Frequency Collaborative Learning and Hierarchical Cross-Modal Fusion
CV and Pattern Recognition
Finds fake videos better, even new kinds.
Next-Frame Feature Prediction for Multimodal Deepfake Detection and Temporal Localization
CV and Pattern Recognition
Finds fake videos by predicting what happens next.
A Novel Local Focusing Mechanism for Deepfake Detection Generalization
CV and Pattern Recognition
Finds fake videos even if they look different.