Defending Deepfake via Texture Feature Perturbation
By: Xiao Zhang, Changfang Chen, Tianyi Wang
Potential Business Impact:
Protects photos so Deepfake edits come out visibly broken.
The rapid development of Deepfake technology poses severe challenges to social trust and information security. Most existing detection methods rely on passive analysis, which struggles against high-quality Deepfake content, so proactive defense has recently emerged: invisible signals are inserted into images before any editing takes place. In this paper, we introduce a proactive Deepfake detection approach based on facial texture features. Because human eyes are more sensitive to perturbations in smooth regions, we invisibly embed perturbations in texture regions of low perceptual saliency, applying localized perturbations to key texture regions while minimizing unwanted noise in non-textured areas. Our texture-guided perturbation framework first extracts preliminary texture features via Local Binary Patterns (LBP), and then introduces a dual-model attention strategy to generate and optimize the texture perturbations. Experiments on the CelebA-HQ and LFW datasets demonstrate the promising performance of our method in distorting Deepfake generation and producing obvious visual defects under multiple attack models, providing an efficient and scalable solution for proactive Deepfake detection.
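To illustrate the texture-guided masking idea described in the abstract, the sketch below builds an LBP-based texture map and confines a perturbation to textured regions. It is a minimal sketch under assumed details, not the paper's implementation: the dual-model attention strategy and the perturbation optimization loop are omitted, and function names, window sizes, and thresholds (texture_mask, win, thresh, epsilon) are illustrative choices.

```python
# Minimal sketch: derive a texture mask with Local Binary Patterns (LBP)
# and add a perturbation only in textured, low-saliency regions.
# Parameters and helper names are illustrative, not from the paper.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.ndimage import uniform_filter


def texture_mask(gray, points=8, radius=1, win=9, thresh=0.15):
    """Return a binary mask that is 1 in textured regions of a grayscale image."""
    # Uniform LBP codes summarize local micro-patterns around each pixel.
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    # Local variance of the LBP codes: high variance indicates rich texture.
    mean = uniform_filter(lbp, size=win)
    var = uniform_filter(lbp ** 2, size=win) - mean ** 2
    var = (var - var.min()) / (var.max() - var.min() + 1e-8)
    return (var > thresh).astype(np.float32)


def apply_texture_perturbation(image, perturbation, mask, epsilon=8 / 255):
    """Add a bounded perturbation only where the texture mask is active."""
    # image: HxWx3 float array in [0, 1]; perturbation: same shape.
    noised = image + epsilon * np.clip(perturbation, -1.0, 1.0) * mask[..., None]
    return np.clip(noised, 0.0, 1.0)
```

In practice the perturbation passed to apply_texture_perturbation would come from an adversarial optimization against the target Deepfake model(s); this sketch only shows how an LBP texture map can keep that noise out of smooth facial regions where it would be visually salient.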
Similar Papers
Disruptive Attacks on Face Swapping via Low-Frequency Perceptual Perturbations
CV and Pattern Recognition
Stops fake videos from fooling people.
Example-Based Feature Painting on Textures
CV and Pattern Recognition
Creates realistic textures with damage and wear.
Realism to Deception: Investigating Deepfake Detectors Against Face Enhancement
CV and Pattern Recognition
Makes fake faces harder to spot.