Score: 1

Towards Explainable Fake Image Detection with Multi-Modal Large Language Models

Published: April 19, 2025 | arXiv ID: 2504.14245v1

By: Yikun Ji, Yan Hong, Jiahui Zhan, and more

Potential Business Impact:

Finds fake pictures and explains how it can tell.

Business Areas:
Image Recognition, Data and Analytics, Software

Progress in image generation raises significant public security concerns. We argue that fake image detection should not operate as a "black box". Instead, an ideal approach must ensure both strong generalization and transparency. Recent progress in Multi-modal Large Language Models (MLLMs) offers new opportunities for reasoning-based AI-generated image detection. In this work, we evaluate the capabilities of MLLMs in comparison to traditional detection methods and human evaluators, highlighting their strengths and limitations. Furthermore, we design six distinct prompts and propose a framework that integrates these prompts to develop a more robust, explainable, and reasoning-driven detection system. The code is available at https://github.com/Gennadiyev/mllm-defake.
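
For intuition, below is a minimal sketch of what a prompt-ensemble MLLM detector might look like. The prompt texts, the model name, and the majority-vote aggregation are illustrative assumptions, not the paper's actual six prompts or framework; the authors' real implementation is in the linked repository.

```python
# Minimal sketch (not the authors' code) of prompt-ensemble fake-image
# detection with a vision-capable LLM. Prompts, model name, and the
# majority-vote aggregation are hypothetical; see
# https://github.com/Gennadiyev/mllm-defake for the paper's framework.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-ins for the paper's prompt set.
PROMPTS = [
    "Is this image AI-generated? Answer 'real' or 'fake', then explain why.",
    "Inspect lighting, shadows, and textures. Does anything look synthetic? "
    "Answer 'real' or 'fake' with a short justification.",
    "Check hands, text, and background geometry for generation artifacts. "
    "Answer 'real' or 'fake' and point to the evidence.",
]


def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()


def detect(image_path: str, model: str = "gpt-4o") -> dict:
    """Query the MLLM once per prompt and majority-vote the verdicts."""
    image_url = encode_image(image_path)
    verdicts, explanations = [], []
    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }],
        )
        answer = resp.choices[0].message.content
        explanations.append(answer)
        # Naive keyword check; a real system would parse a structured answer.
        verdicts.append("fake" if "fake" in answer.lower() else "real")
    label = max(set(verdicts), key=verdicts.count)
    return {"label": label, "votes": verdicts, "explanations": explanations}


if __name__ == "__main__":
    result = detect("suspect.jpg")
    print(result["label"], result["votes"])
```

The key design point the abstract highlights is that each query returns a natural-language justification alongside the verdict, so the aggregated decision remains explainable rather than a black-box score.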

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/Gennadiyev/mllm-defake

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition