The Coherence Trap: When MLLM-Crafted Narratives Exploit Manipulated Visual Contexts
By: Yuchen Zhang, Yaxiong Wang, Yujiao Wu and more
Potential Business Impact:
Finds fake news made by smart computer programs.
The detection and grounding of multimedia manipulation has emerged as a critical challenge in combating AI-generated disinformation. While existing methods have made progress in recent years, we identify two fundamental limitations in current approaches: (1) Underestimation of MLLM-driven deception risk: prevailing techniques primarily address rule-based text manipulations, yet fail to account for sophisticated misinformation synthesized by multimodal large language models (MLLMs), which can dynamically generate semantically coherent, contextually plausible yet deceptive narratives conditioned on manipulated images; (2) Unrealistic misalignment artifacts: current benchmarks rely on artificially misaligned content that lacks semantic coherence, rendering it easily detectable. To address these gaps holistically, we propose a new adversarial pipeline that leverages MLLMs to generate high-risk disinformation. Our approach begins with constructing the MLLM-Driven Synthetic Multimodal (MDSM) dataset, in which images are first altered using state-of-the-art editing techniques and then paired with MLLM-generated deceptive texts that maintain semantic consistency with the visual manipulations. Building upon this foundation, we present the Artifact-aware Manipulation Diagnosis via MLLM (AMD) framework, featuring two key innovations, an Artifact Pre-perception Encoding strategy and Manipulation-Oriented Reasoning, to tame MLLMs for the MDSM problem. Comprehensive experiments validate our framework's superior generalization capabilities as a unified architecture for detecting MLLM-powered multimodal deceptions.
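The two-stage dataset construction described above, edit the image first, then condition the deceptive text on the edited result, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the `MDSMSample` schema, and the stubbed editor and MLLM calls are all hypothetical stand-ins for the actual editing models and MLLM prompting used to build MDSM.

```python
from dataclasses import dataclass


@dataclass
class MDSMSample:
    """One synthetic sample: a manipulated image paired with a
    semantically consistent deceptive caption (hypothetical schema)."""
    image_id: str
    edit_instruction: str
    deceptive_caption: str
    label: str


def edit_image(image_id: str, instruction: str) -> str:
    # Stand-in for a state-of-the-art image editing model; here it
    # just returns an identifier tagging the image as edited.
    return f"{image_id}::edited[{instruction}]"


def generate_deceptive_caption(edited_image: str) -> str:
    # Stand-in for an MLLM prompted to write a plausible narrative
    # conditioned on the *manipulated* image, so text and visual
    # manipulation stay semantically consistent.
    return f"Plausible narrative consistent with {edited_image}"


def build_mdsm_sample(image_id: str, instruction: str) -> MDSMSample:
    """Sketch of the adversarial pipeline: the key ordering is that
    the deceptive text is generated *after* (and conditioned on) the
    visual manipulation, unlike artificially misaligned benchmarks."""
    edited = edit_image(image_id, instruction)
    caption = generate_deceptive_caption(edited)
    return MDSMSample(image_id, instruction, caption, "manipulated")
```

The point of the sketch is the ordering: because the caption generator only ever sees the edited image, the resulting text-image pair is coherent by construction, which is what makes such samples harder to detect than artificially misaligned ones.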
Similar Papers
Interpretable and Reliable Detection of AI-Generated Images via Grounded Reasoning in MLLMs
CV and Pattern Recognition
Finds fake pictures and shows why.
Beyond Artificial Misalignment: Detecting and Grounding Semantic-Coordinated Multimodal Manipulations
CV and Pattern Recognition
Finds fake pictures with matching fake stories.
Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models
CV and Pattern Recognition
Helps computers spot fake news stories.