Investigating self-supervised representations for audio-visual deepfake detection
By: Dragos-Alexandru Boldisor, Stefan Smeu, Dan Oneata, and more
Potential Business Impact:
Finds fake videos by listening and watching.
Self-supervised representations excel at many vision and speech tasks, but their potential for audio-visual deepfake detection remains underexplored. Unlike prior work, which uses these features in isolation or buries them within complex architectures, we systematically evaluate them across modalities (audio, video, multimodal) and domains (lip movements, generic visual content). We assess three key dimensions: detection effectiveness, interpretability of the encoded information, and cross-modal complementarity. We find that most self-supervised features capture deepfake-relevant information and that this information is complementary. Moreover, models primarily attend to semantically meaningful regions rather than spurious artifacts. Yet none generalize reliably across datasets. This generalization failure likely stems from dataset characteristics, not from the features themselves latching onto superficial patterns. These results expose both the promise and the fundamental challenges of self-supervised representations for deepfake detection: while they learn meaningful patterns, robust cross-domain performance remains elusive.
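The evaluation the abstract describes boils down to a frozen-feature probing protocol: extract embeddings from a pretrained self-supervised encoder, then check whether a simple classifier on top can separate real from fake clips. Below is a minimal sketch of that idea in Python; the `load_features` loader, the feature dimension, and the synthetic data are hypothetical stand-ins, not the authors' code.

```python
# Minimal linear-probe sketch: train a simple classifier on frozen
# self-supervised embeddings and measure real-vs-fake separability.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def load_features(n_clips: int, dim: int = 768) -> tuple[np.ndarray, np.ndarray]:
    """Hypothetical stand-in for precomputed embeddings: in practice these
    would be pooled per-clip vectors from an audio, video, or multimodal
    self-supervised encoder. Here we use random data so the sketch runs."""
    X = rng.normal(size=(n_clips, dim))   # one pooled embedding per clip
    y = rng.integers(0, 2, size=n_clips)  # 0 = real, 1 = fake
    return X, y

X_train, y_train = load_features(1000)
X_test, y_test = load_features(200)

# If a linear probe on frozen features scores well above chance, the
# representation encodes deepfake-relevant information.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
scores = probe.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")
```

The complementarity finding corresponds to repeating this probe on concatenated embeddings from two modalities (e.g., audio plus video) and checking whether the fused AUC exceeds either single-modality probe; the cross-dataset failure corresponds to training the probe on one dataset and evaluating on another.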
Similar Papers
SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection
CV and Pattern Recognition
Spots fake videos by listening to voices.
Next-Frame Feature Prediction for Multimodal Deepfake Detection and Temporal Localization
CV and Pattern Recognition
Finds fake videos by predicting what happens next.
KLASSify to Verify: Audio-Visual Deepfake Detection Using SSL-based Audio and Handcrafted Visual Features
Audio and Speech Processing
Finds fake videos by listening to the sound.