Score: 2

The Deepfake Detective: Interpreting Neural Forensics Through Sparse Features and Manifolds

Published: December 25, 2025 | arXiv ID: 2512.21670v1

By: Subramanyam Sahoo, Jared Junkin

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Explains which internal features deepfake detectors rely on to flag synthetic video, pointing toward more interpretable and robust detection tools.

Business Areas:
Image Recognition Data and Analytics, Software

Deepfake detection models have achieved high accuracy in identifying synthetic media, but their decision processes remain largely opaque. In this paper we present a mechanistic interpretability framework for deepfake detection applied to a vision-language model. Our approach combines a sparse autoencoder (SAE) analysis of internal network representations with a novel forensic manifold analysis that probes how the model's features respond to controlled forensic artifact manipulations. We demonstrate that only a small fraction of latent features are actively used in each layer, and that the geometric properties of the model's feature manifold, including intrinsic dimensionality, curvature, and feature selectivity, vary systematically with different types of deepfake artifacts. These insights provide a first step toward opening the "black box" of deepfake detectors, allowing us to identify which learned features correspond to specific forensic artifacts and to guide the development of more interpretable and robust models.
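The abstract's claim that "only a small fraction of latent features are actively used in each layer" can be illustrated with a minimal sparse-autoencoder-style sketch. Everything below is an assumption for illustration: the toy activations, the overcomplete ReLU encoder, and the negative bias are a common SAE design, not the paper's actual architecture or weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's internal activations:
# 256 samples of a 64-dimensional hidden state.
activations = rng.normal(size=(256, 64))

# Hypothetical SAE encoder: an overcomplete dictionary of 512 latent
# features with a ReLU nonlinearity. The negative bias pushes most
# pre-activations below zero, encouraging sparse latent codes.
W_enc = rng.normal(scale=0.1, size=(64, 512))
b_enc = -1.0 * np.ones(512)

latents = np.maximum(activations @ W_enc + b_enc, 0.0)  # ReLU encoding

# Mean fraction of latent features firing per sample — the kind of
# per-layer sparsity statistic the paper's analysis reports.
active_fraction = (latents > 0).mean()
print(f"mean active latent fraction: {active_fraction:.3f}")
```

On this toy setup only a small share of the 512 latents fire for any given input, mirroring the sparsity observation; in the paper this statistic is computed on real detector activations rather than random data.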

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition