Task-Model Alignment: A Simple Path to Generalizable AI-Generated Image Detection
By: Ruoxin Chen, Jiahui Gao, Kaiqing Lin, and more
Potential Business Impact:
Finds fake pictures by checking both their meaning and their pixel details.
Vision Language Models (VLMs) are increasingly adopted for AI-generated image (AIGI) detection, yet converting VLMs into detectors requires substantial resources, and the resulting models still exhibit severe hallucinations. To probe the core issue, we conduct an empirical analysis and observe two characteristic behaviors: (i) fine-tuning VLMs on high-level semantic supervision strengthens semantic discrimination and generalizes well to unseen data; (ii) fine-tuning VLMs on low-level pixel-artifact supervision yields poor transfer. We attribute VLMs' underperformance to task-model misalignment: semantics-oriented VLMs inherently lack sensitivity to fine-grained pixel artifacts, so detecting semantically non-discriminative pixel artifacts exceeds their inductive biases. In contrast, we observe that conventional pixel-artifact detectors capture low-level pixel artifacts yet exhibit limited semantic awareness relative to VLMs, highlighting that distinct models are better matched to distinct tasks. In this paper, we formalize AIGI detection as two complementary tasks--semantic consistency checking and pixel-artifact detection--and show that neglecting either induces systematic blind spots. Guided by this view, we introduce the Task-Model Alignment principle and instantiate it as a two-branch detector, AlignGemini, comprising a VLM fine-tuned exclusively with pure semantic supervision and a pixel-artifact expert trained exclusively with pure pixel-artifact supervision. By enforcing orthogonal supervision on two simplified datasets, each branch trains to its strengths, producing complementary discrimination over semantic and pixel cues. On five in-the-wild benchmarks, AlignGemini delivers a +9.5 gain in average accuracy, supporting task-model alignment as an effective path to generalizable AIGI detection.
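To make the two-branch idea concrete, below is a minimal sketch in PyTorch of how a detector built on the Task-Model Alignment principle could be organized: one branch scores high-level semantic inconsistency (standing in for the fine-tuned VLM) and the other scores low-level pixel artifacts (standing in for the pixel-artifact expert), with the two scores fused at inference. All class names, the placeholder encoders, and the max-score fusion rule are illustrative assumptions; the abstract does not specify AlignGemini's actual architecture or fusion strategy.

```python
# Minimal sketch of a two-branch AIGI detector under the Task-Model Alignment idea.
# Placeholder modules and the fusion rule are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class SemanticBranch(nn.Module):
    """Stand-in for a VLM fine-tuned only on high-level semantic supervision."""

    def __init__(self, feature_dim: int = 768):
        super().__init__()
        # Placeholder encoder; in practice this would be a (fine-tuned) VLM backbone.
        self.encoder = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU())
        self.head = nn.Linear(256, 1)

    def forward(self, semantic_features: torch.Tensor) -> torch.Tensor:
        # Returns a per-image "semantic inconsistency" score in [0, 1].
        return torch.sigmoid(self.head(self.encoder(semantic_features)))


class PixelArtifactBranch(nn.Module):
    """Stand-in for a conventional low-level pixel-artifact detector (e.g., a small CNN)."""

    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Returns a per-image "pixel artifact" score in [0, 1].
        return torch.sigmoid(self.head(self.cnn(images)))


class TwoBranchDetector(nn.Module):
    """Each branch is trained separately on its own (orthogonal) supervision;
    at inference their scores are fused. Fusing with max() -- i.e., flag the image
    as fake if either cue fires -- is an assumption made for this sketch."""

    def __init__(self, semantic_branch: SemanticBranch, pixel_branch: PixelArtifactBranch):
        super().__init__()
        self.semantic_branch = semantic_branch
        self.pixel_branch = pixel_branch

    @torch.no_grad()
    def forward(self, images: torch.Tensor, semantic_features: torch.Tensor) -> torch.Tensor:
        s_sem = self.semantic_branch(semantic_features)
        s_pix = self.pixel_branch(images)
        return torch.maximum(s_sem, s_pix)
```

The design choice this sketch illustrates is the orthogonal supervision described in the abstract: neither branch is asked to model the other's cue, so the semantics-oriented model never has to fit pixel-level noise and the pixel expert never has to reason about scene plausibility.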
Similar Papers
LVLM-Aided Alignment of Task-Specific Vision Models
CV and Pattern Recognition
Makes AI models understand things like people do.
Semantic Misalignment in Vision-Language Models under Perceptual Degradation
CV and Pattern Recognition
Makes self-driving cars safer by checking their "eyes."
Beyond Semantic Features: Pixel-level Mapping for Generalized AI-Generated Image Detection
CV and Pattern Recognition
Finds fake pictures made by new AI.