DFA-CON: A Contrastive Learning Approach for Detecting Copyright Infringement in DeepFake Art
By: Haroon Wahab, Hassan Ugail, Irfan Mehmood
Potential Business Impact:
Detects forged or copyright-infringing AI-generated artworks.
The recent proliferation of generative AI tools for visual content creation, particularly in the context of visual artworks, has raised serious concerns about copyright infringement and forgery. The large-scale datasets used to train these models often contain a mixture of copyrighted and non-copyrighted artworks. Given the tendency of generative models to memorize training patterns, they are susceptible to varying degrees of copyright violation. Building on the recently proposed DeepfakeArt Challenge benchmark, this work introduces DFA-CON, a contrastive learning framework designed to detect copyright-infringing or forged AI-generated art. DFA-CON learns a discriminative representation space by imposing affinity between original artworks and their forged counterparts within a contrastive learning framework. The model is trained across multiple attack types, including inpainting, style transfer, adversarial perturbation, and cutmix. Evaluation results demonstrate robust detection performance across most attack types, outperforming recent pretrained foundation models. Code and model checkpoints will be released publicly upon acceptance.
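To make the contrastive objective concrete, below is a minimal sketch of an NT-Xent-style loss in which each original artwork and its forged counterpart are treated as a positive pair, in the spirit of the framework described above. The encoder outputs, temperature value, and function names are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumed, not the authors' code): NT-Xent-style contrastive
# loss where row i of z_forged is a forgery of row i of z_orig, so the two
# form a positive pair and all other samples in the batch act as negatives.
import torch
import torch.nn.functional as F


def contrastive_loss(z_orig: torch.Tensor,
                     z_forged: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """z_orig, z_forged: (N, D) embeddings from some image encoder."""
    z_orig = F.normalize(z_orig, dim=1)
    z_forged = F.normalize(z_forged, dim=1)
    z = torch.cat([z_orig, z_forged], dim=0)          # (2N, D)
    sim = z @ z.t() / temperature                     # cosine similarity logits
    n = z_orig.size(0)
    # Exclude self-similarity from the softmax denominator.
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # The positive for sample i is its counterpart at index i + n (or i - n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage with random embeddings standing in for encoder outputs.
    torch.manual_seed(0)
    orig, forged = torch.randn(8, 128), torch.randn(8, 128)
    print(contrastive_loss(orig, forged).item())
```

In a setup like this, originals and their attacked versions (inpainting, style transfer, adversarial perturbation, cutmix) are pulled together in embedding space while unrelated artworks are pushed apart, which is what enables a simple similarity check to flag forged counterparts at inference time.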
Similar Papers
DATA: Multi-Disentanglement based Contrastive Learning for Open-World Semi-Supervised Deepfake Attribution
CV and Pattern Recognition
Attributes deepfakes to their source generators, including previously unseen ones, in an open-world semi-supervised setting.
Comparative Analysis of Deepfake Detection Models: New Approaches and Perspectives
CV and Pattern Recognition
Compares deepfake detection models and surveys new approaches and perspectives.
Supervised Contrastive Learning for Few-Shot AI-Generated Image Detection and Attribution
CV and Pattern Recognition
Detects and attributes AI-generated images from few labeled examples using supervised contrastive learning.