SFANet: Spatial-Frequency Attention Network for Deepfake Detection
By: Vrushank Ahire, Aniruddh Muley, Shivam Zample, and more
Potential Business Impact:
Finds fake videos better than before.
With the recent rise of deepfakes, detecting manipulated media has become a pressing issue. Most existing approaches fail to generalize across diverse datasets and generation techniques. We therefore propose a novel ensemble framework that combines the strengths of transformer-based architectures, such as Swin Transformers and ViTs, with texture-based methods to achieve better detection accuracy and robustness. Our method introduces innovative data-splitting, sequential training, frequency splitting, patch-based attention, and face segmentation techniques to handle dataset imbalances, enhance high-impact regions (e.g., eyes and mouth), and improve generalization. Our model achieves state-of-the-art performance when tested on the DFWild-Cup dataset, a diverse subset of eight deepfake datasets. The ensemble benefits from the complementarity of these approaches, with transformers excelling in global feature extraction and texture-based methods providing interpretability. This work demonstrates that hybrid models can effectively address the evolving challenges of deepfake detection, offering a robust solution for real-world applications.
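The abstract names frequency splitting as one of the core techniques. Below is a minimal sketch, assuming a radial low-pass mask in the 2D Fourier domain implemented with PyTorch's torch.fft, of how an input batch could be decomposed into low- and high-frequency bands before being routed to separate backbones. The cutoff radius and the masking scheme are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.fft

def frequency_split(images: torch.Tensor, cutoff: float = 0.15):
    """Split a batch of images (B, C, H, W) into low- and high-frequency
    components using a radial mask in the shifted 2D Fourier spectrum.

    `cutoff` is the mask radius as a fraction of the smaller image side;
    its value here is an assumption for illustration only.
    """
    B, C, H, W = images.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))

    # Build a circular low-pass mask centered on the zero-frequency bin.
    ys = torch.arange(H, device=images.device) - H // 2
    xs = torch.arange(W, device=images.device) - W // 2
    radius = torch.sqrt(ys[:, None].float() ** 2 + xs[None, :].float() ** 2)
    low_mask = (radius <= cutoff * min(H, W)).to(spectrum.dtype)

    # Inverse-transform each band back to the spatial domain.
    low = torch.fft.ifft2(
        torch.fft.ifftshift(spectrum * low_mask, dim=(-2, -1))).real
    high = torch.fft.ifft2(
        torch.fft.ifftshift(spectrum * (1 - low_mask), dim=(-2, -1))).real
    return low, high

if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)
    low, high = frequency_split(x)
    # The two bands reconstruct the input up to numerical error.
    print(torch.allclose(low + high, x, atol=1e-4))
```

In an ensemble of the kind described, the low-frequency band (overall structure) and the high-frequency band (textures and blending artifacts, where deepfake traces often concentrate) could each feed a separate branch, with the branch logits fused for the final prediction; the exact routing and fusion used by the authors is not specified in this abstract.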
Similar Papers
Towards Generalizable Deepfake Detection with Spatial-Frequency Collaborative Learning and Hierarchical Cross-Modal Fusion
CV and Pattern Recognition
Finds fake videos better, even new kinds.
Classifying Deepfakes Using Swin Transformers
CV and Pattern Recognition
Finds fake pictures better than old ways.
DFCon: Attention-Driven Supervised Contrastive Learning for Robust Deepfake Detection
CV and Pattern Recognition
Finds fake videos to stop lies.