FaceSwapGuard: Safeguarding Facial Privacy from DeepFake Threats through Identity Obfuscation

Authors: Li Wang, Zheng Li, Xuhong Zhang, Shouling Ji, Shanqing Guo

Published: 2025-02-15 13:45:19+00:00

AI Summary

This paper introduces FaceSwapGuard (FSG), a novel black-box defense mechanism designed to protect facial privacy against deepfake face-swapping threats. FSG subtly perturbs a user's facial image, disrupting the features extracted by identity encoders and thereby misleading face-swapping techniques. As a result, the generated images carry identities significantly different from the original user's, reducing the face match rate from over 90% to below 10%.

Abstract

DeepFakes pose a significant threat to our society. One representative DeepFake application is face-swapping, which replaces the identity in a facial image with that of a victim. Although existing methods partially mitigate these risks by degrading the quality of swapped images, they often fail to disrupt the identity transformation effectively. To fill this gap, we propose FaceSwapGuard (FSG), a novel black-box defense mechanism against deepfake face-swapping threats. Specifically, FSG introduces imperceptible perturbations to a user's facial image, disrupting the features extracted by identity encoders. When shared online, these perturbed images mislead face-swapping techniques, causing them to generate facial images with identities significantly different from the original user. Extensive experiments demonstrate the effectiveness of FSG against multiple face-swapping techniques, reducing the face match rate from 90% (without defense) to below 10%. Both qualitative and quantitative studies further confirm its ability to confuse human perception, highlighting its practical utility. Additionally, we investigate key factors that may influence FSG and evaluate its robustness against various adaptive adversaries.


Key findings
FSG demonstrated high effectiveness, significantly reducing the face match rate of deepfake-generated images from over 90% to below 10% on various academic models and commercial face verification APIs. Both quantitative and qualitative analyses confirmed its ability to confuse human perception regarding the original identity. The protected images also exhibited robustness against adaptive adversaries employing image denoising techniques like Gaussian blur and compression.
Approach
FaceSwapGuard (FSG) operates as a black-box defense by introducing imperceptible adversarial perturbations to a user's facial image prior to online sharing. It does so by maximizing the deviation of identity features extracted by a surrogate identity encoder (e.g., FaceNet or ArcFace), together with an intermediate feature-map loss. Random image transformations are also applied during optimization to improve transferability and robustness across face-swapping models.
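To make the optimization concrete, the sketch below illustrates the general idea with a toy stand-in: a PGD-style ascent that maximizes the deviation of features from a surrogate encoder under an L-infinity pixel budget, with a random jitter per step standing in for the paper's random image transformations. The linear encoder, dimensions, and step sizes are all hypothetical, not the paper's actual models or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a surrogate identity encoder (FaceNet/ArcFace
# in the paper): a fixed random linear map from pixels to an embedding.
D, K = 64, 16                                  # pixel dim, embedding dim
W = rng.standard_normal((K, D)) / np.sqrt(D)

def encode(x):
    """Identity embedding of a flattened face image x."""
    return W @ x

def perturb(x, eps=0.05, alpha=0.01, steps=40):
    """PGD-style ascent: maximize the feature deviation ||f(x+d) - f(x)||^2
    subject to ||d||_inf <= eps. A small random jitter each step is a crude
    analogue of the random image transformations used for robustness."""
    delta = np.zeros_like(x)
    f0 = encode(x)
    for _ in range(steps):
        jitter = 0.01 * rng.standard_normal(x.shape)     # random transform
        diff = encode(x + delta + jitter) - f0
        grad = 2.0 * W.T @ diff          # gradient of ||f(x+d)-f(x)||^2 wrt d
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta

x = rng.uniform(0.0, 1.0, D)                   # toy "face image"
x_adv = perturb(x)

shift = np.linalg.norm(encode(x_adv) - encode(x))
budget = np.max(np.abs(x_adv - x))
print(f"feature shift: {shift:.3f}, max pixel change: {budget:.3f}")
```

A face-swapping pipeline that reads its target identity from the encoder would then be fed a displaced embedding, which is the mechanism behind the reported drop in face match rate; the real method additionally constrains perceptibility and uses intermediate feature maps, which this sketch omits.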
Datasets
CelebA-HQ
Model(s)
UNKNOWN
Author countries
China