Fair and Interpretable Deepfake Detection in Videos
By: Akihito Yoshii, Ryosuke Sonoda, Ramya Srinivasan
Potential Business Impact:
Finds fake videos fairly for everyone.
Existing deepfake detection methods often exhibit bias, lack transparency, and fail to capture temporal information, leading to biased decisions and unreliable results across demographic groups. In this paper, we propose a fairness-aware deepfake detection framework that integrates temporal feature learning and demographic-aware data augmentation to enhance fairness and interpretability. Our method leverages sequence-based clustering for temporal modeling of deepfake videos and concept extraction to improve detection reliability while also facilitating interpretable decisions for non-expert users. Additionally, we introduce a demographic-aware data augmentation method that balances underrepresented groups and applies frequency-domain transformations to preserve deepfake artifacts, thereby mitigating bias and improving generalization. Extensive experiments on the FaceForensics++, DFD, Celeb-DF, and DFDC datasets using state-of-the-art (SoTA) architectures (Xception, ResNet) demonstrate the efficacy of the proposed method in achieving the best trade-off between fairness and accuracy compared to SoTA methods.
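The abstract's augmentation idea, balancing underrepresented demographic groups and transforming images in the frequency domain without destroying deepfake artifacts, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the oversampling-to-largest-group policy, and the choice to jitter only low Fourier frequencies (on the assumption that many manipulation artifacts concentrate in high frequencies) are all illustrative assumptions.

```python
import numpy as np

def balance_groups(images, groups, rng=None):
    """Oversample each demographic group up to the size of the largest group.
    `groups` holds one demographic label per image (illustrative policy)."""
    rng = rng or np.random.default_rng(0)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    target = counts.max()
    out_imgs, out_groups = [], []
    for lab, cnt in zip(labels, counts):
        idx = np.flatnonzero(groups == lab)
        # Resample with replacement to reach the target count.
        extra = rng.choice(idx, size=target - cnt, replace=True)
        for i in list(idx) + list(extra):
            out_imgs.append(images[i])
            out_groups.append(str(lab))
    return out_imgs, out_groups

def freq_preserving_jitter(img, cutoff=0.1, strength=0.05, rng=None):
    """Perturb only the low-frequency band of a grayscale image in the
    Fourier domain, leaving the high-frequency band (where many deepfake
    artifacts are assumed to live) essentially untouched."""
    rng = rng or np.random.default_rng(0)
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized radial distance from the spectrum center.
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    low = r < cutoff                          # low-frequency mask
    noise = 1 + strength * rng.standard_normal(f.shape)
    f = np.where(low, f * noise, f)           # jitter low frequencies only
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

In practice such an augmenter would be applied to the minority-group copies produced by the oversampling step, so balanced groups do not consist of exact duplicates.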
Similar Papers
Decoupling Bias, Aligning Distributions: Synergistic Fairness Optimization for Deepfake Detection
CV and Pattern Recognition
Makes fake video checkers fair for everyone.
Reliable and Reproducible Demographic Inference for Fairness in Face Analysis
CV and Pattern Recognition
Makes face-reading programs fairer for everyone.
A Hybrid Deep Learning and Forensic Approach for Robust Deepfake Detection
CV and Pattern Recognition
Finds fake videos by combining clues.