Enhancing Abnormality Identification: Robust Out-of-Distribution Strategies for Deepfake Detection
By: Luca Maiano, Fabrizio Casadei, Irene Amerini
Potential Business Impact:
Finds fake videos even from new tricks.
Detecting deepfakes has become a critical challenge in Computer Vision and Artificial Intelligence. Despite significant progress in detection techniques, generalizing them to open-set scenarios remains a persistent difficulty. Neural networks are typically trained under a closed-world assumption, but because new generative models are constantly emerging, encountering data produced by models outside the training distribution is inevitable. To address these challenges, in this paper we propose two novel Out-Of-Distribution (OOD) detection approaches. The first is trained to reconstruct the input image, while the second incorporates an attention mechanism for detecting OOD samples. Our experiments validate the effectiveness of the proposed approaches against existing state-of-the-art techniques. Our method achieves promising results in deepfake detection and ranks among the top-performing configurations on the benchmark, demonstrating its potential for robust, adaptable solutions in dynamic, real-world applications.
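The abstract does not spell out how the reconstruction-based approach turns reconstructions into OOD decisions, but a common pattern is to score each sample by its reconstruction error and flag samples whose error exceeds a threshold calibrated on in-distribution data. The sketch below illustrates that generic pattern only; the function names, the mean-squared-error score, and the percentile calibration are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def reconstruction_error(x, x_hat):
    # Per-sample mean squared error between inputs and their reconstructions
    # (e.g. from an autoencoder). Averages over all non-batch dimensions.
    return np.mean((x - x_hat) ** 2, axis=tuple(range(1, x.ndim)))

def calibrate_threshold(val_errors, percentile=95.0):
    # Choose a threshold so that ~95% of in-distribution validation samples
    # fall below it; samples above it will be flagged as OOD.
    return np.percentile(val_errors, percentile)

def is_ood(errors, threshold):
    # Boolean mask: True where the reconstruction error exceeds the threshold,
    # i.e. the model reconstructs the sample poorly and it is likely OOD.
    return errors > threshold

# Illustrative usage with toy numbers (not real experimental data):
val_errors = np.array([0.10, 0.12, 0.15, 0.18, 0.20])  # in-distribution errors
threshold = calibrate_threshold(val_errors)
flags = is_ood(np.array([0.05, 0.50]), threshold)       # [False, True]
```

The intuition is that a model trained to reconstruct only in-distribution images (e.g. real faces and known fakes) will reconstruct samples from unseen generators less faithfully, so high reconstruction error serves as an OOD signal.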
Similar Papers
Revisiting Out-of-Distribution Detection in Real-time Object Detection: From Benchmark Pitfalls to a New Mitigation Paradigm
CV and Pattern Recognition
Teaches computers to ignore fake objects.
Local Background Features Matter in Out-of-Distribution Detection
CV and Pattern Recognition
Helps computers know when they see something new.
Dream-Box: Object-wise Outlier Generation for Out-of-Distribution Detection
CV and Pattern Recognition
Helps computers spot fake objects in pictures.