CausalCLIP: Causally-Informed Feature Disentanglement and Filtering for Generalizable Detection of Generated Images
By: Bo Liu, Qiao Qin, Qinghui He
Potential Business Impact:
Finds fake pictures even from new AI.
The rapid advancement of generative models has increased the demand for generated-image detectors that generalize across diverse and evolving generation techniques. However, existing methods, including those leveraging pre-trained vision-language models, often produce highly entangled representations that mix task-relevant forensic cues (causal features) with spurious or irrelevant patterns (non-causal features), limiting generalization. To address this, we propose CausalCLIP, a framework that explicitly disentangles causal from non-causal features and applies targeted filtering, guided by causal inference principles, to retain only the most transferable and discriminative forensic cues. By modeling the generation process with a structural causal model and enforcing statistical independence through Gumbel-Softmax-based feature masking and Hilbert-Schmidt Independence Criterion (HSIC) constraints, CausalCLIP isolates stable causal features that are robust to distribution shifts. When tested on unseen generative models from different model series, CausalCLIP demonstrates strong generalization, improving on state-of-the-art methods by 6.83% in accuracy and 4.06% in average precision.
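To make the two mechanisms named in the abstract concrete, here is a minimal PyTorch sketch of Gumbel-Softmax feature masking combined with an HSIC independence penalty. This is an illustration, not the authors' implementation: the CausalMask module, the 512-dimensional embedding size, the median-heuristic kernel bandwidth, and the 0.1 loss weight are all assumptions made for the example.

```python
# A minimal sketch (not the paper's code): a Gumbel-Softmax binary mask splits
# frozen CLIP features into causal / non-causal parts, and a biased empirical
# HSIC estimator penalizes statistical dependence between the two parts.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_kernel(x: torch.Tensor) -> torch.Tensor:
    """RBF kernel matrix with a median-heuristic bandwidth (an assumption here)."""
    d2 = torch.cdist(x, x).pow(2)
    sigma2 = d2[d2 > 0].median().clamp(min=1e-8)
    return torch.exp(-d2 / (2 * sigma2))

def hsic(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2."""
    n = x.size(0)
    H = torch.eye(n, device=x.device) - torch.full((n, n), 1.0 / n, device=x.device)
    return torch.trace(rbf_kernel(x) @ H @ rbf_kernel(y) @ H) / (n - 1) ** 2

class CausalMask(nn.Module):
    """Learned per-dimension gate; Gumbel-Softmax keeps sampling differentiable."""
    def __init__(self, dim: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim, 2))  # (keep, drop) logits per dim

    def forward(self, feats: torch.Tensor, tau: float = 0.5):
        # Straight-through sampling yields a near-binary {0,1} mask per dimension.
        mask = F.gumbel_softmax(self.logits, tau=tau, hard=True)[:, 0]
        return feats * mask, feats * (1.0 - mask)  # causal part, non-causal part

# Illustrative training step on stand-in embeddings (512-dim CLIP features assumed).
feats = torch.randn(32, 512)            # frozen CLIP image features for one batch
labels = torch.randint(0, 2, (32,))     # real (0) vs. generated (1)
masker, head = CausalMask(512), nn.Linear(512, 2)
causal, spurious = masker(feats)
# Classify from the causal part only; penalize its dependence on the rest.
loss = F.cross_entropy(head(causal), labels) + 0.1 * hsic(causal, spurious)
loss.backward()
```

The `hard=True` straight-through estimator keeps the mask binary in the forward pass while still passing gradients to the mask logits, so the split between causal and non-causal dimensions can be learned end to end alongside the classifier.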
Similar Papers
CLIP-Flow: A Universal Discriminator for AI-Generated Images Inspired by Anomaly Detection
CV and Pattern Recognition
Finds fake pictures made by computers.
Causal Disentanglement and Cross-Modal Alignment for Enhanced Few-Shot Learning
CV and Pattern Recognition
Teaches computers to learn new things with fewer examples.
How Noise Benefits AI-generated Image Detection
CV and Pattern Recognition
Finds fake pictures made by computers.