ThinkFake: Reasoning in Multimodal Large Language Models for AI-Generated Image Detection
By: Tai-Ming Huang, Wei-Tung Lin, Kai-Lung Hua, and more
Potential Business Impact:
Finds fake pictures made by computers and explains how it knows.
The increasing realism of AI-generated images has raised serious concerns about misinformation and privacy violations, highlighting the urgent need for accurate and interpretable detection methods. While existing approaches have made progress, most rely on binary classification without explanations or depend heavily on supervised fine-tuning, resulting in limited generalization. In this paper, we propose ThinkFake, a novel reasoning-based and generalizable framework for AI-generated image detection. Our method leverages a Multimodal Large Language Model (MLLM) equipped with a forgery reasoning prompt and is trained using Group Relative Policy Optimization (GRPO) reinforcement learning with carefully designed reward functions. This design enables the model to perform step-by-step reasoning and produce interpretable, structured outputs. We further introduce a structured detection pipeline to enhance reasoning quality and adaptability. Extensive experiments show that ThinkFake outperforms state-of-the-art methods on the GenImage benchmark and demonstrates strong zero-shot generalization on the challenging LOKI benchmark. These results validate our framework's effectiveness and robustness. Code will be released upon acceptance.
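The abstract says the model is trained with Group Relative Policy Optimization (GRPO) using "carefully designed reward functions" that score both the structured output and the verdict. As a rough illustration of what that involves, the sketch below shows GRPO's core trick, computing advantages relative to a group of sampled responses instead of using a learned value critic, together with two hypothetical reward functions (the `format_reward` / `accuracy_reward` names and the `<think>`/`<answer>` tag template are assumptions, not the paper's actual design):

```python
import math

def grpo_advantages(rewards):
    """Group-relative advantages: each sampled response's reward is
    normalized against the mean and std of its own group. This is the
    step that lets GRPO drop the separate value critic."""
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = math.sqrt(var) or 1e-8  # guard against zero std (all rewards equal)
    return [(r - mean) / std for r in rewards]

def format_reward(output):
    """Hypothetical structured-output reward: 1.0 if the response follows
    a <think>...</think><answer>real|fake</answer> template, else 0.0."""
    has_reasoning = "<think>" in output and "</think>" in output
    has_verdict = ("<answer>real</answer>" in output
                   or "<answer>fake</answer>" in output)
    return 1.0 if (has_reasoning and has_verdict) else 0.0

def accuracy_reward(output, label):
    """Hypothetical correctness reward: 1.0 if the tagged verdict matches
    the ground-truth label ('real' or 'fake'), else 0.0."""
    return 1.0 if f"<answer>{label}</answer>" in output else 0.0

# Example: score a group of 4 sampled responses for one fake image,
# then convert the rewards into group-relative advantages.
samples = [
    "<think>inconsistent shadows, waxy skin texture</think><answer>fake</answer>",
    "looks fine to me",  # no template, no verdict -> reward 0
    "<think>GAN-style checkerboard artifacts</think><answer>fake</answer>",
    "<think>natural lighting and noise</think><answer>real</answer>",
]
rewards = [format_reward(s) + accuracy_reward(s, "fake") for s in samples]
advantages = grpo_advantages(rewards)
```

Responses that follow the template and answer correctly get positive advantages, the malformed or wrong ones get negative advantages, and the policy update then pushes probability mass toward the former.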
Similar Papers
Towards Explainable Fake Image Detection with Multi-Modal Large Language Models
CV and Pattern Recognition
Finds fake pictures by explaining how.
Interpretable and Reliable Detection of AI-Generated Images via Grounded Reasoning in MLLMs
CV and Pattern Recognition
Finds fake pictures and shows why.
Can Multi-modal (reasoning) LLMs work as deepfake detectors?
CV and Pattern Recognition
Finds fake pictures using smart computer brains.