Transferable Dual-Domain Feature Importance Attack against AI-Generated Image Detector
By: Weiheng Zhu, Gang Cao, Jing Liu, and more
Potential Business Impact:
Crafts adversarial images that fool AI-generated-image detectors, letting fake pictures pass as real.
Recent AI-generated image (AIGI) detectors achieve impressive accuracy under clean conditions. From an anti-forensics perspective, it is important to develop advanced adversarial attacks for evaluating the security of such detectors, which remains insufficiently explored. This letter proposes a Dual-domain Feature Importance Attack (DuFIA) scheme to invalidate AIGI detectors to some extent. Forensically important features are captured by the spatially interpolated gradient and frequency-aware perturbation. Adversarial transferability is enhanced by jointly modeling spatial and frequency-domain feature importances, which are fused to guide optimization-based adversarial example generation. Extensive experiments across various AIGI detectors verify the cross-model transferability, transparency, and robustness of DuFIA.
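The letter itself gives no implementation details, but the general idea it describes can be sketched: compute a spatially interpolated (integrated-gradients-style) importance map, weight it by a frequency-domain importance map, and use the fused map to guide a signed perturbation step. The sketch below is a minimal illustration only, assuming a toy linear surrogate in place of a real detector; all function names and parameters here are hypothetical, not the authors' code.

```python
import numpy as np

def detector_score(x, w):
    # Toy linear surrogate "detector": higher score = "AI-generated".
    return float(np.sum(w * x))

def detector_grad(x, w):
    # Gradient of the linear surrogate w.r.t. the input (analytic here).
    return w

def spatial_importance(x, w, steps=8):
    # "Spatially interpolated gradient": average gradients along the
    # straight path from a zero baseline to x (integrated-gradients style).
    baseline = np.zeros_like(x)
    grads = np.zeros_like(x)
    for a in np.linspace(0.0, 1.0, steps):
        grads += detector_grad(baseline + a * (x - baseline), w)
    return np.abs((x - baseline) * grads / steps)

def frequency_importance(x):
    # Frequency-aware weighting: emphasize high-magnitude FFT components.
    mag = np.abs(np.fft.fft2(x))
    return mag / (mag.max() + 1e-12)

def dufia_step(x, w, eps=0.03, alpha=0.01):
    # Fuse the two importance maps, then take one signed gradient step
    # (descending the detector score), clipped to an eps-ball around x.
    fused = spatial_importance(x, w) * frequency_importance(x)
    fused /= fused.max() + 1e-12
    x_adv = x - alpha * fused * np.sign(detector_grad(x, w))
    return np.clip(x_adv, x - eps, x + eps)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
wts = rng.standard_normal((8, 8))
adv = dufia_step(img, wts)
print(detector_score(adv, wts) < detector_score(img, wts))  # True: score drops
```

In the actual scheme the gradient would come from backpropagation through a surrogate AIGI detector, and the step would be iterated inside an optimization loop; the fused importance map serves to concentrate the perturbation on forensically salient pixels and frequencies.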
Similar Papers
A Sanity Check for Multi-In-Domain Face Forgery Detection in the Real World
CV and Pattern Recognition
Finds fake videos even with new tricks.
DINO-Detect: A Simple yet Effective Framework for Blur-Robust AI-Generated Image Detection
CV and Pattern Recognition
Finds fake pictures even when they're blurry.
Is Artificial Intelligence Generated Image Detection a Solved Problem?
CV and Pattern Recognition
Finds fake pictures made by computers.