UMCL: Unimodal-generated Multimodal Contrastive Learning for Cross-compression-rate Deepfake Detection
By: Ching-Yi Lai, Chih-Yu Jian, Pei-Cheng Chuang, and more
Potential Business Impact:
Finds fake videos even when they are heavily compressed.
In deepfake detection, the varying degrees of compression applied by social media platforms pose significant challenges for model generalization and reliability. Although existing methods have progressed from unimodal to multimodal approaches, both face critical limitations: unimodal methods struggle with feature degradation under the compression used in social media streaming, while multimodal approaches require expensive data collection and labeling and suffer from inconsistent modality quality or availability in real-world scenarios. To address these challenges, we propose a novel Unimodal-generated Multimodal Contrastive Learning (UMCL) framework for robust cross-compression-rate (CCR) deepfake detection. During training, our approach transforms a single visual modality into three complementary features: compression-robust rPPG signals, temporal landmark dynamics, and semantic embeddings from pre-trained vision-language models. These features are explicitly aligned through an affinity-driven semantic alignment (ASA) strategy, which models inter-modal relationships with affinity matrices and optimizes their consistency through contrastive learning. A cross-quality similarity learning (CQSL) strategy then enhances feature robustness across compression rates. Extensive experiments demonstrate that our method achieves superior performance across compression rates and manipulation types, establishing a new benchmark for robust deepfake detection. Notably, it maintains high detection accuracy even when individual features degrade, while the explicit alignment provides interpretable insights into inter-feature relationships.
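The abstract names two training objectives without giving their formulas. Below is a minimal PyTorch sketch of how they might look, assuming a pairwise cosine affinity matrix for ASA and an InfoNCE-style contrastive loss for CQSL; the function names (`asa_loss`, `cqsl_loss`), feature shapes, and loss forms are our assumptions, not the paper's actual formulation.

```python
# Minimal sketch of the two UMCL objectives, under assumed loss forms.
import torch
import torch.nn.functional as F

def affinity(x: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity (affinity) matrix over a batch of features."""
    x = F.normalize(x, dim=-1)
    return x @ x.t()  # (B, B)

def asa_loss(rppg: torch.Tensor, landmarks: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
    """Affinity-driven semantic alignment (assumed form): encourage the affinity
    matrices of the three unimodal-generated features to agree with each other."""
    a_r, a_l, a_s = affinity(rppg), affinity(landmarks), affinity(semantic)
    return (F.mse_loss(a_r, a_s) + F.mse_loss(a_l, a_s) + F.mse_loss(a_r, a_l)) / 3

def cqsl_loss(feat_raw: torch.Tensor, feat_compressed: torch.Tensor,
              temperature: float = 0.1) -> torch.Tensor:
    """Cross-quality similarity learning (assumed InfoNCE form): features of the
    same clip at different compression rates should be each other's positives."""
    z1 = F.normalize(feat_raw, dim=-1)
    z2 = F.normalize(feat_compressed, dim=-1)
    logits = z1 @ z2.t() / temperature                       # (B, B) similarity logits
    targets = torch.arange(z1.size(0), device=z1.device)     # matching indices on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random features stand in for the three extracted modalities; the
# "compressed" view is simulated by adding noise to the semantic features.
B, D = 8, 128
rppg, lm, sem = torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
loss = asa_loss(rppg, lm, sem) + cqsl_loss(sem, sem + 0.05 * torch.randn(B, D))
```

The key design idea reflected here is that alignment happens at the level of batch-wise affinity structure rather than raw features, so modalities of different dimensionality can still be compared; the contrastive cross-quality term then pulls together representations of the same clip at different compression rates.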
Similar Papers
Compression Beyond Pixels: Semantic Compression with Multimodal Foundation Models
CV and Pattern Recognition
Makes pictures smaller, keeping their meaning.
MCA: Modality Composition Awareness for Robust Composed Multimodal Retrieval
Computation and Language
Helps AI understand mixed text and pictures better.
Compression then Matching: An Efficient Pre-training Paradigm for Multimodal Embedding
CV and Pattern Recognition
Makes computers understand pictures and words together better.