MMD-Thinker: Adaptive Multi-Dimensional Thinking for Multimodal Misinformation Detection
By: Junjie Wu, Guohong Fu
Potential Business Impact:
Finds fake online pictures and stories better.
Multimodal misinformation floods various social media platforms and continues to evolve in the era of AI-generated content (AIGC). Such misinformation, with its low creation cost and high deceptiveness, poses significant threats to society. While recent studies leverage general-purpose multimodal large language models (MLLMs) to achieve remarkable detection results, they encounter two critical limitations: (1) Insufficient reasoning, where general-purpose MLLMs often follow a uniform reasoning paradigm but generate inaccurate explanations and judgments, due to the lack of task-specific knowledge for multimodal misinformation detection. (2) Reasoning biases, where a single thinking mode locks detectors into a suboptimal path for judgment, struggling to keep pace with fast-growing and intricate multimodal misinformation. In this paper, we propose MMD-Thinker, a two-stage framework for multimodal misinformation detection through adaptive multi-dimensional thinking. First, we develop tailor-designed thinking modes for multimodal misinformation detection. Second, we adopt task-specific instruction tuning to inject the tailored thinking modes into general-purpose MLLMs. Third, we further leverage a reinforcement learning strategy with a mixed advantage function, which incentivizes reasoning capabilities across trajectories. Furthermore, we construct the multimodal misinformation reasoning (MMR) dataset, which encompasses more than 8K image-text pairs with both reasoning processes and classification labels, to advance the realm of multimodal misinformation detection. Experimental results demonstrate that our proposed MMD-Thinker achieves state-of-the-art performance on both in-domain and out-of-domain benchmark datasets, while maintaining flexible inference and token usage. Code will be publicly available on GitHub.
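The abstract does not spell out how the "mixed advantage function" is computed. As a minimal sketch only, assuming a GRPO-style group-relative advantage that mixes two reward components (e.g., a judgment-accuracy reward and a reasoning-quality reward), the idea might look like the following; all function names, weights, and reward definitions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a "mixed advantage" for RL fine-tuning (not the authors' code).
# Assumes a group-relative setup: for each image-text pair, the policy samples several
# reasoning trajectories, each scored by two rewards that are mixed before normalization.
import numpy as np

def mixed_advantages(accuracy_rewards, reasoning_rewards, weight=0.5, eps=1e-8):
    """Compute group-relative advantages from a mix of two reward components.

    accuracy_rewards:  per-trajectory reward for the final real/fake judgment (assumed).
    reasoning_rewards: per-trajectory reward for the thinking trace quality (assumed).
    weight:            mixing coefficient between the two components (illustrative).
    """
    acc = np.asarray(accuracy_rewards, dtype=float)
    rea = np.asarray(reasoning_rewards, dtype=float)
    mixed = weight * acc + (1.0 - weight) * rea          # mix the two reward signals
    return (mixed - mixed.mean()) / (mixed.std() + eps)  # normalize within the sampled group

# Example: 4 sampled trajectories for one image-text pair
adv = mixed_advantages([1.0, 0.0, 1.0, 0.0], [0.8, 0.2, 0.4, 0.9])
print(adv)  # trajectories with a higher mixed reward receive a positive advantage
```

Trajectories whose mixed reward exceeds the group mean are reinforced, which is one plausible reading of "incentivizes reasoning capabilities across trajectories."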
Similar Papers
Enhancing Multimodal Misinformation Detection by Replaying the Whole Story from Image Modality Perspective
CV and Pattern Recognition
Helps find fake news by checking text and pictures.
Towards Robust and Reliable Multimodal Misinformation Recognition with Incomplete Modality
Multimedia
Finds fake news even if parts are missing.
A New Dataset and Benchmark for Grounding Multimodal Misinformation
Social and Information Networks
Finds fake videos by checking words, sounds, and pictures.