Score: 2

SFANet: Spatial-Frequency Attention Network for Deepfake Detection

Published: October 6, 2025 | arXiv ID: 2510.04630v1

By: Vrushank Ahire, Aniruddh Muley, Shivam Zample, and more

Potential Business Impact:

Detects fake videos more accurately and generalizes better across different deepfake generation techniques.

Business Areas:
Facial Recognition Data and Analytics, Software

Detecting manipulated media has become a pressing issue with the recent rise of deepfakes. Most existing approaches fail to generalize across diverse datasets and generation techniques. We thus propose a novel ensemble framework that combines the strengths of transformer-based architectures, such as Swin Transformers and ViTs, with texture-based methods to achieve better detection accuracy and robustness. Our method introduces innovative data-splitting, sequential training, frequency splitting, patch-based attention, and face segmentation techniques to handle dataset imbalances, enhance high-impact regions (e.g., eyes and mouth), and improve generalization. Our model achieves state-of-the-art performance when tested on the DFWild-Cup dataset, a diverse subset of eight deepfake datasets. The ensemble benefits from the complementarity of these approaches, with transformers excelling at global feature extraction and texture-based methods providing interpretability. This work demonstrates that hybrid models can effectively address the evolving challenges of deepfake detection, offering a robust solution for real-world applications.
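The abstract's pairing of spatial transformers with frequency-domain cues can be pictured with a short sketch. The PyTorch code below is a minimal illustration only: it assumes timm backbones, a high-pass split via torch.fft, and simple logit averaging as the ensemble rule. The backbone names, frequency cutoff, and fusion scheme are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a hybrid spatial-frequency ensemble for deepfake detection.
# Assumptions (not from the paper): timm backbones, logit averaging as the
# ensemble rule, and a simple high-pass frequency split via torch.fft.

import torch
import torch.nn as nn
import timm


class FrequencyBranch(nn.Module):
    """Classifies the high-frequency residual of a face crop (illustrative)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = timm.create_model(
            "resnet18", pretrained=False, num_classes=num_classes
        )

    @staticmethod
    def high_pass(x: torch.Tensor, cutoff: int = 8) -> torch.Tensor:
        # Suppress a low-frequency square around the spectrum centre,
        # then transform back to the spatial domain.
        f = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"), dim=(-2, -1))
        _, _, h, w = f.shape
        ch, cw = h // 2, w // 2
        mask = torch.ones_like(f.real)
        mask[..., ch - cutoff:ch + cutoff, cw - cutoff:cw + cutoff] = 0
        f = f * mask
        return torch.fft.ifft2(
            torch.fft.ifftshift(f, dim=(-2, -1)), norm="ortho"
        ).real

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.high_pass(x))


class HybridEnsemble(nn.Module):
    """Averages logits from two spatial transformers and a frequency branch."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.swin = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=False, num_classes=num_classes
        )
        self.vit = timm.create_model(
            "vit_small_patch16_224", pretrained=False, num_classes=num_classes
        )
        self.freq = FrequencyBranch(num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = [self.swin(x), self.vit(x), self.freq(x)]
        return torch.stack(logits, dim=0).mean(dim=0)


if __name__ == "__main__":
    model = HybridEnsemble()
    faces = torch.randn(2, 3, 224, 224)  # batch of aligned face crops
    print(model(faces).shape)  # torch.Size([2, 2])
```

In this sketch the frequency branch plays the role of the texture-oriented component, while the Swin and ViT backbones capture global spatial structure; the paper's patch-based attention and face segmentation steps are omitted for brevity.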

Country of Origin
🇮🇳 🇦🇺 India, Australia

Page Count
8 pages

Category
Computer Science:
CV and Pattern Recognition