The Deepfake Detective: Interpreting Neural Forensics Through Sparse Features and Manifolds
By: Subramanyam Sahoo, Jared Junkin
Potential Business Impact:
Explains how deepfake detectors decide that a video is fake, making detection systems easier to audit, trust, and improve.
Deepfake detection models have achieved high accuracy in identifying synthetic media, but their decision processes remain largely opaque. In this paper, we present a mechanistic interpretability framework for deepfake detection, applied to a vision-language model. Our approach combines sparse autoencoder (SAE) analysis of internal network representations with a novel forensic manifold analysis that probes how the model's features respond to controlled manipulations of forensic artifacts. We demonstrate that only a small fraction of latent features are actively used in each layer, and that the geometric properties of the model's feature manifold, including intrinsic dimensionality, curvature, and feature selectivity, vary systematically with the type of deepfake artifact. These insights provide a first step toward opening the "black box" of deepfake detectors, allowing us to identify which learned features correspond to specific forensic artifacts and to guide the development of more interpretable and robust models.
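The abstract does not spell out the SAE architecture or the manifold estimators, so the sketch below only illustrates the two kinds of measurement it describes: a ReLU sparse autoencoder trained with an L1 penalty, whose latent codes expose what fraction of features fire on a batch of layer activations, and a PCA participation ratio as one common proxy for intrinsic dimensionality. All names and numbers here (SparseAutoencoder, d_model=768, n_latents=8192, l1_coeff) are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal ReLU SAE over a model layer's hidden activations (illustrative)."""

    def __init__(self, d_model: int, n_latents: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))   # non-negative latent code
        x_hat = self.decoder(z)           # reconstruction of the activation
        return x_hat, z

def sae_loss(x, x_hat, z, l1_coeff: float = 1e-3) -> torch.Tensor:
    # Reconstruction error plus an L1 penalty that drives most latents to zero,
    # which is what makes the learned feature dictionary sparse.
    return torch.mean((x - x_hat) ** 2) + l1_coeff * z.abs().mean()

def participation_ratio(acts: torch.Tensor) -> float:
    # PCA participation ratio, one common proxy for intrinsic dimensionality:
    # (sum of eigenvalues)^2 / (sum of squared eigenvalues) of the covariance.
    centered = acts - acts.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (acts.shape[0] - 1)
    eig = torch.linalg.eigvalsh(cov)
    return (eig.sum() ** 2 / (eig ** 2).sum()).item()

# Toy usage: random tensors stand in for activations captured from the model.
# A trained SAE would show a much smaller fraction of active latents.
sae = SparseAutoencoder(d_model=768, n_latents=8192)
acts = torch.randn(256, 768)
_, z = sae(acts)
print(f"fraction of active latents: {(z > 0).float().mean().item():.3f}")
print(f"participation-ratio ID:     {participation_ratio(acts):.1f}")
```

In practice one would capture activations from the detector's layers with forward hooks, train the SAE on them, and compare these statistics across artifact manipulations; the toy tensors above exist only to make the snippet runnable.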
Similar Papers
A Hybrid Deep Learning and Forensic Approach for Robust Deepfake Detection
CV and Pattern Recognition
Finds fake videos by combining clues.
Investigating self-supervised representations for audio-visual deepfake detection
CV and Pattern Recognition
Finds fake videos by listening and watching.
Deepfake Detection Via Facial Feature Extraction and Modeling
CV and Pattern Recognition
Spots fake videos by watching faces move.