Beyond Spatial Frequency: Pixel-wise Temporal Frequency-based Deepfake Video Detection
By: Taehoon Kim, Jongwook Choi, Yonghyun Jeong, and more
Potential Business Impact:
Finds fake videos by spotting unnatural movements over time.
We introduce a deepfake video detection approach that exploits pixel-wise temporal inconsistencies, which traditional spatial frequency-based detectors often overlook. Such detectors represent temporal information merely by stacking spatial frequency spectra across frames, and therefore fail to detect temporal artifacts in the pixel plane. Our approach performs a 1D Fourier transform along the time axis for each pixel, extracting features that are highly sensitive to temporal inconsistencies, especially in regions prone to unnatural movements. To precisely locate regions containing these temporal artifacts, we introduce an attention proposal module trained in an end-to-end manner. Additionally, our joint transformer module effectively integrates the pixel-wise temporal frequency features with spatio-temporal context features, expanding the range of detectable forgery artifacts. Our framework represents a significant advancement in deepfake video detection, providing robust performance across diverse and challenging detection scenarios.
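The core operation described in the abstract, a 1D Fourier transform along the time axis for every pixel, can be sketched as follows. This is a minimal illustration only, assuming a PyTorch video tensor of shape (T, C, H, W); the function name `pixelwise_temporal_fft` and the log-magnitude post-processing are assumptions for the example, not details taken from the paper.

```python
import torch

def pixelwise_temporal_fft(frames: torch.Tensor) -> torch.Tensor:
    """Compute a 1D FFT along the time axis for every pixel.

    Args:
        frames: video clip of shape (T, C, H, W).

    Returns:
        Log-magnitude spectrum of shape (T // 2 + 1, C, H, W),
        i.e. one temporal frequency profile per pixel and channel.
    """
    # rfft over dim=0 (time) treats each (c, h, w) location as an
    # independent 1D signal of length T.
    spectrum = torch.fft.rfft(frames, dim=0)
    # Log-magnitude compresses the dynamic range of the spectrum
    # (a common, assumed choice; not specified in the abstract).
    return torch.log1p(spectrum.abs())


# Example: a 16-frame grayscale clip at 224x224 resolution.
clip = torch.rand(16, 1, 224, 224)
freq_features = pixelwise_temporal_fft(clip)
print(freq_features.shape)  # torch.Size([9, 1, 224, 224])
```

Pixels in regions with unnatural motion produce distinctive temporal spectra, which is what makes such features sensitive to the temporal inconsistencies the paper targets; the attention proposal and joint transformer modules described above would then consume features of this kind.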
Similar Papers
Towards Generalizable Deepfake Detection with Spatial-Frequency Collaborative Learning and Hierarchical Cross-Modal Fusion
CV and Pattern Recognition
Finds fake videos better, even new kinds.
Deepfake Detection with Spatio-Temporal Consistency and Attention
CV and Pattern Recognition
Finds fake videos by spotting tiny errors.
Audio-Visual Deepfake Detection With Local Temporal Inconsistencies
CV and Pattern Recognition
Spots fake videos by checking whether sound and video match.