SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection
By: Yachao Liang, Min Yu, Gang Li, and more
Potential Business Impact:
Spots fake videos by checking that voices match lip movements.
Detection of face forgery videos remains a formidable challenge in digital forensics, especially generalizing to unseen datasets and staying robust to common perturbations. In this paper, we tackle this issue by leveraging the synergy between audio and visual speech, taking a novel approach based on audio-visual speech representation learning. Our work is motivated by the finding that audio signals, enriched with speech content, precisely reflect facial movements. To this end, we first learn audio-visual speech representations on real videos via a self-supervised masked prediction task that encodes both local and global semantic information simultaneously. The derived model is then transferred directly to the forgery detection task. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in cross-dataset generalization and robustness, without any fake video being used in model training. Code is available at https://github.com/Eleven4AI/SpeechForensics.
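To make the transfer step concrete, the sketch below shows one way such a detector could score a video: take frame-level speech embeddings from the audio and visual streams of a pretrained self-supervised model, measure how well they agree, and flag the video as fake when agreement is low. This is a minimal illustration under stated assumptions, not the released SpeechForensics implementation; the helpers `forgery_score` and `is_fake`, the truncation-based alignment, and the fixed threshold are all hypothetical.

```python
# Minimal sketch: detect forgery via audio-visual speech (dis)agreement.
# Assumes frame-aligned embeddings from a pretrained self-supervised
# audio-visual speech model (e.g., an AV-HuBERT-style encoder).
import torch
import torch.nn.functional as F

def forgery_score(audio_emb: torch.Tensor, video_emb: torch.Tensor) -> float:
    """Return a score in [0, 1]; higher means the streams disagree more.

    audio_emb, video_emb: (T, D) frame-level speech embeddings produced
    by paired encoders on temporally aligned inputs.
    """
    # Align lengths in case the two streams differ by a few frames
    # (simple truncation here; the paper may align differently).
    t = min(audio_emb.shape[0], video_emb.shape[0])
    a, v = audio_emb[:t], video_emb[:t]

    # Frame-wise cosine similarity between audio and visual speech content.
    sim = F.cosine_similarity(a, v, dim=-1)  # shape (T,)

    # Real videos should show consistently high audio-visual agreement;
    # a manipulated face breaks this, lowering the mean similarity.
    return float(1.0 - sim.mean())

def is_fake(audio_emb: torch.Tensor, video_emb: torch.Tensor,
            threshold: float = 0.5) -> bool:
    # `threshold` is a placeholder; in practice it would be calibrated
    # on a held-out set of real videos.
    return forgery_score(audio_emb, video_emb) > threshold

if __name__ == "__main__":
    # Shapes only; real embeddings would come from the pretrained model.
    a = torch.randn(120, 256)  # 120 frames of 256-dim audio features
    v = torch.randn(120, 256)  # matching visual speech features
    print(forgery_score(a, v), is_fake(a, v))
```

Note that nothing in this scoring rule is trained on fake videos, which matches the paper's claim that detection works without any fake video in model training.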
Similar Papers
Investigating self-supervised representations for audio-visual deepfake detection
CV and Pattern Recognition
Finds fake videos by listening and watching.
AuViRe: Audio-visual Speech Representation Reconstruction for Deepfake Temporal Localization
CV and Pattern Recognition
Finds fake videos by checking if sound and lips match.
Towards Generalizable Deepfake Detection via Forgery-aware Audio-Visual Adaptation: A Variational Bayesian Approach
Multimedia
Catches fake video and audio by checking how well they match.