Robust and Calibrated Detection of Authentic Multimedia Content
By: Sarim Hashmi, Abdelrahman Elsayed, Mohammed Talha Alam, and more
Potential Business Impact:
Verifies that media is real, even when attackers adapt their fakes to evade detection.
Generative models can synthesize highly realistic content (so-called deepfakes) that is already being misused at scale to undermine digital media authenticity. Current deepfake detection methods are unreliable for two reasons: (i) distinguishing inauthentic content post hoc is often impossible (e.g., with memorized samples), leading to an unbounded false positive rate (FPR); and (ii) detection lacks robustness, as adversaries can adapt to known detectors with near-perfect accuracy using minimal computational resources. To address these limitations, we propose a resynthesis framework to determine whether a sample is authentic or whether its authenticity can be plausibly denied. We make two key contributions focusing on the high-precision, low-recall setting against efficient (i.e., compute-restricted) adversaries. First, we demonstrate that our calibrated resynthesis method is the most reliable approach for verifying authentic samples while maintaining controllable, low FPRs. Second, we show that our method achieves adversarial robustness against efficient adversaries, whereas prior methods are easily evaded under identical compute budgets. Our approach supports multiple modalities and leverages state-of-the-art inversion techniques.
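The calibration idea can be pictured concretely. The sketch below is illustrative only, not the authors' implementation: `resynthesize`, `distance`, and the FPR budget `alpha` are assumed names, and the threshold rule is a standard split-conformal quantile over resynthesis distances of known-generated samples, which matches the abstract's goal of a controllable FPR under the assumption that the calibration set is representative of what the generator can produce.

```python
import numpy as np

def calibrate_threshold(calibration_distances, alpha=0.05):
    """Choose a resynthesis-distance threshold from known-generated samples.

    Declaring a sample authentic only when its distance to the best
    resynthesis EXCEEDS this threshold bounds the rate of generated
    samples wrongly verified as authentic at roughly alpha, via the
    conservative (1 - alpha) empirical quantile (split-conformal style).
    """
    d = np.sort(np.asarray(calibration_distances, dtype=float))
    n = len(d)
    # Finite-sample order statistic: the ceil((n + 1) * (1 - alpha))-th value.
    k = int(np.ceil((n + 1) * (1 - alpha))) - 1
    return d[min(max(k, 0), n - 1)]

def verify_authentic(sample, resynthesize, distance, threshold):
    """Verify a sample as authentic only if the generator cannot recreate
    it closely, i.e. its authenticity cannot be plausibly denied."""
    return distance(sample, resynthesize(sample)) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    l2 = lambda a, b: float(np.linalg.norm(a - b))

    # Toy stand-in for inversion + resynthesis: generated samples come
    # back almost unchanged, so their resynthesis distance is tiny.
    resynthesize = lambda x: x + rng.normal(scale=0.01, size=x.shape)

    # Calibrate on distances measured over samples the generator made.
    generated = rng.normal(size=(500, 64))
    tau = calibrate_threshold([l2(x, resynthesize(x)) for x in generated],
                              alpha=0.05)

    # An authentic sample resists faithful resynthesis (simulated here by
    # a much larger reconstruction error) and so is verified ...
    authentic = rng.normal(size=64)
    poor_resynthesis = lambda x: x + rng.normal(scale=0.5, size=x.shape)
    print(verify_authentic(authentic, poor_resynthesis, l2, tau))   # True
    # ... while a generated sample is not (with probability ~1 - alpha).
    print(verify_authentic(generated[0], resynthesize, l2, tau))    # False
```

In a real system the toy `resynthesize` stand-in would be replaced by generative-model inversion (diffusion or GAN inversion, per the abstract's "state-of-the-art inversion techniques") and `l2` by a perceptual distance; the decision rule is unchanged: a sample is verified as authentic only when no faithful resynthesis of it exists.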
Similar Papers
A Hybrid Deep Learning and Forensic Approach for Robust Deepfake Detection
CV and Pattern Recognition
Finds fake videos by combining deep learning with forensic clues.
Pindrop it! Audio and Visual Deepfake Countermeasures for Robust Detection and Fine-Grained Localization
CV and Pattern Recognition
Finds fake audio and video, and pinpoints the altered parts.
Toward Generalized Detection of Synthetic Media: Limitations, Challenges, and the Path to Multimodal Solutions
CV and Pattern Recognition
Finds fake pictures and videos made by computers.