Disruptive Attacks on Face Swapping via Low-Frequency Perceptual Perturbations
By: Mengxiao Huang, Minglei Shu, Shuwang Zhou and more
Potential Business Impact:
Stops fake videos from fooling people.
Deepfake technology, driven by Generative Adversarial Networks (GANs), poses significant risks to privacy and societal security. Existing detection methods are predominantly passive, focusing on post-event analysis without preventing attacks. To address this, we propose an active defense method based on low-frequency perceptual perturbations that disrupts face-swapping manipulation, degrading both the performance and the naturalness of generated content. Unlike prior approaches that used low-frequency perturbations to impair classification accuracy, our method directly targets the generative process of deepfake techniques. We combine frequency- and spatial-domain features to strengthen the defense: by introducing artifacts through low-frequency perturbations while preserving high-frequency details, we ensure the protected image remains visually plausible. Additionally, we design a complete architecture featuring an encoder, a perturbation generator, and a decoder, leveraging the discrete wavelet transform (DWT) to extract low-frequency components and generate perturbations that disrupt facial manipulation models. Experiments on CelebA-HQ and LFW demonstrate significant reductions in face-swapping effectiveness, improved defense success rates, and preservation of visual quality.
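The core idea of confining the perturbation to the low-frequency DWT sub-band while leaving high-frequency detail untouched can be illustrated with a short sketch. This is not the authors' code: it uses `pywt` for the wavelet transform, and bounded random noise stands in for the output of the paper's learned perturbation generator; the function name and `epsilon` bound are illustrative assumptions.

```python
# Minimal sketch: perturb only the low-frequency (LL) DWT sub-band of an
# image, keeping the high-frequency sub-bands intact so fine detail survives.
# Random noise here is a placeholder for the paper's learned generator.
import numpy as np
import pywt

def perturb_low_frequency(image: np.ndarray, epsilon: float = 0.05) -> np.ndarray:
    """Add a bounded perturbation to the LL sub-band, then reconstruct."""
    # Single-level 2D DWT: LL holds low-frequency content;
    # LH/HL/HH hold horizontal/vertical/diagonal detail.
    ll, (lh, hl, hh) = pywt.dwt2(image, "haar")
    # Stand-in for the learned perturbation generator: bounded noise,
    # scaled by the magnitude of the LL band.
    delta = np.random.uniform(-epsilon, epsilon, size=ll.shape)
    ll_perturbed = ll + delta * np.abs(ll).max()
    # Inverse DWT with the original high-frequency sub-bands untouched,
    # so the protected image stays visually plausible.
    return pywt.idwt2((ll_perturbed, (lh, hl, hh)), "haar")

# Example on a synthetic grayscale face crop.
img = np.random.rand(256, 256).astype(np.float64)
protected = perturb_low_frequency(img)
print(protected.shape)  # (256, 256)
```

In the paper's full pipeline, the noise above would be replaced by perturbations produced by the trained encoder/generator/decoder architecture and optimized to disrupt the face-swapping model rather than drawn at random.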
Similar Papers
Enhanced Deep Learning DeepFake Detection Integrating Handcrafted Features
CV and Pattern Recognition
Catches fake faces in pictures and videos.
Defending Deepfake via Texture Feature Perturbation
CV and Pattern Recognition
Stops fake videos from fooling people.
Realism to Deception: Investigating Deepfake Detectors Against Face Enhancement
CV and Pattern Recognition
Makes fake faces harder to spot.