Unlocking the Capabilities of Large Vision-Language Models for Generalizable and Explainable Deepfake Detection

Published: March 19, 2025 | arXiv ID: 2503.14853v2

By: Peipeng Yu, Jianwei Fei, Hui Gao, and more

Potential Business Impact:

Detects fake (deepfake) images and explains in plain language why they look manipulated.

Business Areas:
Facial Recognition Data and Analytics, Software

Current Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities in understanding multimodal data, but their potential for deepfake detection remains underexplored due to the misalignment between their general knowledge and forensic patterns. To this end, we present a novel framework that unlocks the potential of LVLMs for deepfake detection. Our framework includes a Knowledge-guided Forgery Detector (KFD), a Forgery Prompt Learner (FPL), and a Large Language Model (LLM). The KFD computes correlations between image features and pristine/deepfake image description embeddings, enabling forgery classification and localization. The outputs of the KFD are subsequently processed by the Forgery Prompt Learner to construct fine-grained forgery prompt embeddings. These embeddings, along with visual and question prompt embeddings, are fed into the LLM to generate textual detection responses. Extensive experiments on multiple benchmarks, including FF++, CDF2, DFD, DFDCP, DFDC, and DF40, demonstrate that our scheme surpasses state-of-the-art methods in generalization performance, while also supporting multi-turn dialogue capabilities.
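To make the described pipeline concrete, the sketch below illustrates the data flow the abstract outlines: a KFD-style module correlates patch-level visual features with pristine/deepfake description embeddings to produce a classification logit and a localization map, and an FPL-style module projects those outputs into prompt embeddings for an LLM. All module names, dimensions, and the specific correlation/projection forms are assumptions for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of the KFD -> FPL -> LLM-prompt flow described in the
# abstract. Dimensions and layer choices are placeholders, not the paper's.
import torch
import torch.nn as nn


class KnowledgeGuidedForgeryDetector(nn.Module):
    """Correlates visual features with pristine/deepfake text embeddings."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Stand-in description embeddings for the "pristine" and "deepfake"
        # classes (in practice these would come from a text encoder).
        self.register_buffer("class_text_emb", torch.randn(2, dim))

    def forward(self, visual_feats: torch.Tensor):
        # visual_feats: (batch, num_patches, dim)
        # Patch-level correlation with each class description.
        corr = torch.einsum("bnd,cd->bnc", visual_feats, self.class_text_emb)
        loc_map = corr.softmax(dim=-1)[..., 1]   # per-patch "fake" score (localization)
        logits = corr.mean(dim=1)                # image-level real/fake logits
        return logits, loc_map


class ForgeryPromptLearner(nn.Module):
    """Turns KFD outputs into fine-grained forgery prompt embeddings."""

    def __init__(self, num_patches: int = 196, llm_dim: int = 4096,
                 num_prompts: int = 8):
        super().__init__()
        self.proj = nn.Linear(num_patches + 2, num_prompts * llm_dim)
        self.num_prompts, self.llm_dim = num_prompts, llm_dim

    def forward(self, logits: torch.Tensor, loc_map: torch.Tensor):
        x = torch.cat([logits, loc_map], dim=-1)  # (batch, num_patches + 2)
        return self.proj(x).view(-1, self.num_prompts, self.llm_dim)


if __name__ == "__main__":
    feats = torch.randn(1, 196, 512)              # stand-in visual features
    kfd, fpl = KnowledgeGuidedForgeryDetector(), ForgeryPromptLearner()
    logits, loc_map = kfd(feats)
    forgery_prompts = fpl(logits, loc_map)
    # These forgery prompt embeddings would be concatenated with visual and
    # question prompt embeddings and passed to the LLM, which generates the
    # textual detection response.
    print(logits.shape, loc_map.shape, forgery_prompts.shape)
```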

Page Count
19 pages

Category
Computer Science:
CV and Pattern Recognition