Next-Frame Feature Prediction for Multimodal Deepfake Detection and Temporal Localization
By: Ashutosh Anshul, Shreyas Gopal, Deepu Rajan, and more
Potential Business Impact:
Finds fake videos by predicting what happens next.
Recent multimodal deepfake detection methods designed for generalization conjecture that single-stage supervised training struggles to generalize across unseen manipulations and datasets. However, such approaches require a separate pretraining phase on real samples. Additionally, these methods primarily focus on detecting audio-visual inconsistencies and may overlook intra-modal artifacts, causing them to fail against manipulations that preserve audio-visual alignment. To address these limitations, we propose a single-stage training framework that enhances generalization by incorporating next-frame prediction for both uni-modal and cross-modal features. We further introduce a window-level attention mechanism that captures discrepancies between predicted and actual frames, enabling the model to detect local artifacts around every frame. This is crucial both for accurately classifying fully manipulated videos and for localizing deepfake segments in partially spoofed samples. Evaluated on multiple benchmark datasets, our model demonstrates strong generalization and precise temporal localization.
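The abstract describes the mechanism only at a high level. The sketch below is a hypothetical PyTorch illustration of that idea, not the authors' implementation: a predictor forecasts the next frame's feature from the preceding frames, the discrepancy between predicted and actual features becomes the detection signal, and attention over local windows of those discrepancies yields per-frame scores. All module names, the GRU-based predictor, the window size, and the max-pooling decision rule are assumptions made for this example.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Hypothetical sketch: predict the feature of frame t+1 from frames 1..t."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # causal temporal summary
        self.head = nn.Linear(dim, dim)

    def forward(self, feats):               # feats: (B, T, D)
        hidden, _ = self.rnn(feats)
        return self.head(hidden[:, :-1])    # predictions for frames 2..T

class WindowAttention(nn.Module):
    """Attend over per-frame prediction errors within a local temporal window."""
    def __init__(self, dim, window=5):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, errors):               # errors: (B, T-1, D)
        B, T, D = errors.shape
        pad = (self.window - T % self.window) % self.window
        x = nn.functional.pad(errors, (0, 0, 0, pad))   # pad time axis
        x = x.reshape(B * (x.shape[1] // self.window), self.window, D)
        x, _ = self.attn(x, x, x)             # local attention inside each window
        x = x.reshape(B, -1, D)[:, :T]        # restore (B, T-1, D)
        return self.score(x).squeeze(-1)      # per-frame fakeness logits

# Usage with uni-modal visual features; the cross-modal branch would be analogous.
B, T, D = 2, 32, 128
visual_feats = torch.randn(B, T, D)           # e.g. from a frozen video encoder
predictor = NextFramePredictor(D)
attn = WindowAttention(D)

pred = predictor(visual_feats)                # predicted features for frames 2..T
err = pred - visual_feats[:, 1:]              # prediction discrepancy per frame
frame_logits = attn(err)                      # (B, T-1) frame-level scores
video_logit = frame_logits.max(dim=1).values  # clip-level score via max-pooling
```

The per-frame logits support temporal localization of spoofed segments directly, while pooling them gives a video-level decision; the paper's window-level attention similarly confines each frame's score to nearby discrepancies so local artifacts are not washed out by globally consistent content.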
Similar Papers
Towards Generalizable Deepfake Detection with Spatial-Frequency Collaborative Learning and Hierarchical Cross-Modal Fusion
CV and Pattern Recognition
Finds fake videos better, even new kinds.
Investigating self-supervised representations for audio-visual deepfake detection
CV and Pattern Recognition
Finds fake videos by listening and watching.
Beyond Flicker: Detecting Kinematic Inconsistencies for Generalizable Deepfake Video Detection
CV and Pattern Recognition
Spots fake videos by finding weird face movements.