Rethinking Individual Fairness in Deepfake Detection
By: Aryana Hou, Li Lin, Justin Li, and more
Potential Business Impact:
Makes fake video detectors fairer for everyone.
Generative AI models have substantially improved the realism of synthetic media, yet their misuse through sophisticated DeepFakes poses significant risks. Despite recent advances in deepfake detection, fairness remains inadequately addressed, enabling deepfake makers to exploit biases against specific populations. While previous studies have emphasized group-level fairness, individual fairness (i.e., ensuring similar predictions for similar individuals) remains largely unexplored. In this work, we identify for the first time that the original principle of individual fairness fundamentally fails in the context of deepfake detection, revealing a critical gap in the literature. To mitigate this, we propose the first generalizable framework that can be integrated into existing deepfake detectors to enhance individual fairness and generalization. Extensive experiments conducted on leading deepfake datasets demonstrate that our approach significantly improves individual fairness while maintaining robust detection performance, outperforming state-of-the-art methods. The code is available at https://github.com/Purdue-M2/Individual-Fairness-Deepfake-Detection.
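To make the "original principle of individual fairness" concrete: it is classically stated as a Lipschitz condition, requiring that the distance between a model's predictions on two inputs be bounded by the distance between the inputs themselves. The sketch below is a minimal, hypothetical illustration of checking that bound for a binary deepfake detector; the function names, the raw-pixel L2 input distance, and the Lipschitz constant are illustrative assumptions, not the paper's actual framework (the paper's point is precisely that this raw formulation breaks down for deepfake detection, where near-identical real and fake faces should receive very different scores).

```python
# Minimal sketch (assumptions: PyTorch detector outputting one logit per image,
# L2 distance on flattened pixels as the input metric, |p_a - p_b| as the
# output metric). Illustrative only; not the paper's proposed method.
import torch
import torch.nn.functional as F

def fairness_violation(detector: torch.nn.Module,
                       x_a: torch.Tensor,
                       x_b: torch.Tensor,
                       lipschitz_const: float = 1.0) -> torch.Tensor:
    """Per-pair violation of the Lipschitz individual-fairness bound
    d_out(f(x_a), f(x_b)) <= L * d_in(x_a, x_b); zero means the bound holds."""
    with torch.no_grad():
        p_a = torch.sigmoid(detector(x_a))  # predicted P(fake) for batch a
        p_b = torch.sigmoid(detector(x_b))  # predicted P(fake) for batch b
    out_dist = (p_a - p_b).abs().squeeze(-1)                        # prediction gap
    in_dist = F.pairwise_distance(x_a.flatten(1), x_b.flatten(1))   # pixel L2 gap
    return torch.clamp(out_dist - lipschitz_const * in_dist, min=0.0)
```

Under this formulation, a convincing fake that is pixel-wise close to a real face forces any accurate detector to "violate" fairness, which is the failure mode the abstract identifies.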
Similar Papers
Decoupling Bias, Aligning Distributions: Synergistic Fairness Optimization for Deepfake Detection
CV and Pattern Recognition
Makes fake video checkers fair for everyone.
Fair and Interpretable Deepfake Detection in Videos
CV and Pattern Recognition
Finds fake videos fairly for everyone.
Robust AI-Generated Face Detection with Imbalanced Data
CV and Pattern Recognition
Finds fake videos even when they look real.