Unlocking the Capabilities of Large Vision-Language Models for Generalizable and Explainable Deepfake Detection
By: Peipeng Yu, Jianwei Fei, Hui Gao, and more
Potential Business Impact:
Finds fake pictures by understanding what they show.
Current Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities in understanding multimodal data, but their potential remains underexplored for deepfake detection due to the misalignment between their knowledge and forensic patterns. To this end, we present a novel framework that unlocks LVLMs' potential capabilities for deepfake detection. Our framework includes a Knowledge-guided Forgery Detector (KFD), a Forgery Prompt Learner (FPL), and a Large Language Model (LLM). The KFD calculates correlations between image features and pristine/deepfake image description embeddings, enabling forgery classification and localization. The outputs of the KFD are subsequently processed by the FPL to construct fine-grained forgery prompt embeddings. These embeddings, along with visual and question prompt embeddings, are fed into the LLM to generate textual detection responses. Extensive experiments on multiple benchmarks, including FF++, CDF2, DFD, DFDCP, DFDC, and DF40, demonstrate that our scheme surpasses state-of-the-art methods in generalization performance while also supporting multi-turn dialogue.
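To make the described pipeline concrete, below is a minimal PyTorch-style sketch of how a KFD and FPL could fit together: the KFD correlates visual patch features with pristine/deepfake description embeddings to produce a forgery score and a coarse localization map, and the FPL maps those outputs to prompt embeddings for the LLM. All class names, tensor shapes, the CLIP-style cosine-similarity step, and the projection layers are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: module names, dimensions, and layers are assumptions for illustration.
import torch
import torch.nn as nn


class KnowledgeGuidedForgeryDetector(nn.Module):
    """KFD (sketch): correlate image patch features with pristine/deepfake
    description embeddings to get a forgery score and a localization map."""
    def __init__(self, dim=512):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, patch_feats, real_text_emb, fake_text_emb):
        # patch_feats: (B, N, D); real/fake_text_emb: (D,) description embeddings
        feats = nn.functional.normalize(self.proj(patch_feats), dim=-1)
        real_sim = feats @ nn.functional.normalize(real_text_emb, dim=-1)  # (B, N)
        fake_sim = feats @ nn.functional.normalize(fake_text_emb, dim=-1)  # (B, N)
        # Per-patch probability of being forged -> coarse localization map
        loc_map = torch.softmax(torch.stack([real_sim, fake_sim], dim=-1), dim=-1)[..., 1]
        score = loc_map.mean(dim=1)  # (B,) image-level forgery score
        return score, loc_map


class ForgeryPromptLearner(nn.Module):
    """FPL (sketch): turn KFD outputs into fine-grained forgery prompt
    embeddings in the LLM's embedding space."""
    def __init__(self, num_patches=196, llm_dim=4096, num_prompts=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_patches + 1, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, num_prompts * llm_dim),
        )
        self.num_prompts, self.llm_dim = num_prompts, llm_dim

    def forward(self, score, loc_map):
        x = torch.cat([score.unsqueeze(-1), loc_map], dim=-1)        # (B, N+1)
        return self.mlp(x).view(-1, self.num_prompts, self.llm_dim)  # (B, P, llm_dim)


# Usage (shapes only): the resulting forgery prompts would be concatenated with
# visual and question prompt embeddings before being fed to the LLM.
kfd, fpl = KnowledgeGuidedForgeryDetector(), ForgeryPromptLearner()
patch_feats = torch.randn(2, 196, 512)
real_emb, fake_emb = torch.randn(512), torch.randn(512)
score, loc_map = kfd(patch_feats, real_emb, fake_emb)
forgery_prompts = fpl(score, loc_map)
```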
Similar Papers
MLLM-Enhanced Face Forgery Detection: A Vision-Language Fusion Solution
CV and Pattern Recognition
Finds fake faces in videos better.
Identity-Aware Vision-Language Model for Explainable Face Forgery Detection
Multimedia
Finds fake pictures by checking if they make sense.
Unlocking the Forgery Detection Potential of Vanilla MLLMs: A Novel Training-Free Pipeline
CV and Pattern Recognition
Finds fake pictures without extra training.