AdaptPrompt: Parameter-Efficient Adaptation of VLMs for Generalizable Deepfake Detection
By: Yichen Jiang, Mohammed Talha Alam, Sohail Ahmed Khan, and more
Recent advances in image generation have led to the widespread availability of highly realistic synthetic media, increasing the difficulty of reliable deepfake detection. A key challenge is generalization, as detectors trained on a narrow class of generators often fail when confronted with unseen models. In this work, we address the pressing need for generalizable detection by leveraging large vision-language models, specifically CLIP, to identify synthetic content across diverse generative techniques. First, we introduce Diff-Gen, a large-scale benchmark dataset comprising 100k diffusion-generated fakes that capture broad spectral artifacts, unlike traditional GAN-based datasets. Models trained on Diff-Gen demonstrate stronger cross-domain generalization, particularly on previously unseen image generators. Second, we propose AdaptPrompt, a parameter-efficient transfer learning framework that jointly learns task-specific textual prompts and visual adapters while keeping the CLIP backbone frozen. We further show via layer ablation that pruning the final transformer block of the vision encoder enhances the retention of high-frequency generative artifacts, significantly boosting detection accuracy. Our evaluation spans 25 challenging test sets, covering synthetic content generated by GANs, diffusion models, and commercial tools, establishing a new state-of-the-art in both standard and cross-domain scenarios. We further demonstrate the framework's versatility through few-shot generalization (using as few as 320 images) and source attribution, enabling the precise identification of generator architectures in closed-set settings.
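As a rough illustration of the parameter-efficient setup the abstract describes (frozen CLIP backbone, pruned final vision transformer block, trainable adapters and prompts), the sketch below assembles these pieces on top of a Hugging Face CLIP ViT-L/14 checkpoint. It is not the authors' implementation: the adapter bottleneck size, the prompt parameterization (class-level learnable embeddings rather than token-level soft prompts), and the real/fake class ordering are all illustrative assumptions.

```python
# Minimal sketch of an AdaptPrompt-style detector, under the assumptions stated above.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor


class AdaptPromptSketch(nn.Module):
    def __init__(self, model_name="openai/clip-vit-large-patch14", bottleneck=64):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(model_name)
        # Prune the final vision transformer block (the layer ablation from the abstract).
        self.clip.vision_model.encoder.layers = self.clip.vision_model.encoder.layers[:-1]
        # Keep the entire CLIP backbone frozen.
        for p in self.clip.parameters():
            p.requires_grad = False

        dim = self.clip.config.projection_dim
        # Lightweight residual bottleneck adapter on the image features (trainable).
        self.adapter = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim)
        )
        # Simplified learnable textual prompts: one trainable vector per class
        # ({real, fake}) in the joint embedding space; token-level soft prompts
        # would be the more faithful variant.
        self.class_prompts = nn.Parameter(torch.randn(2, dim) * 0.02)

    def forward(self, pixel_values):
        # Image features from the truncated, frozen vision encoder.
        img = self.clip.get_image_features(pixel_values=pixel_values)
        img = img + self.adapter(img)  # residual adaptation
        img = img / img.norm(dim=-1, keepdim=True)
        txt = self.class_prompts / self.class_prompts.norm(dim=-1, keepdim=True)
        # Cosine-similarity logits over {real, fake}, scaled by CLIP's logit scale.
        return self.clip.logit_scale.exp() * img @ txt.t()


# Usage sketch: preprocess images with CLIPProcessor, then train only the adapter
# and class_prompts with cross-entropy on real/fake labels; CLIP stays frozen.
```

In this reading, only the adapter and the prompt vectors receive gradients, which is what keeps the adaptation parameter-efficient; swapping the simplified class embeddings for token-level prompt tuning would not change that property.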
Similar Papers
Deepfake Detection that Generalizes Across Benchmarks
CV and Pattern Recognition
Finds fake videos even with new tricks.
GA2-CLIP: Generic Attribute Anchor for Efficient Prompt Tuning in Video-Language Models
CV and Pattern Recognition
Helps AI remember old lessons when learning new ones.
Generalizing Vision-Language Models with Dedicated Prompt Guidance
CV and Pattern Recognition
Helps AI understand new things better.