FaceSwapGuard: Safeguarding Facial Privacy from DeepFake Threats through Identity Obfuscation
Authors: Li Wang, Zheng Li, Xuhong Zhang, Shouling Ji, Shanqing Guo
Published: 2025-02-15 13:45:19+00:00
AI Summary
This paper introduces FaceSwapGuard (FSG), a novel black-box defense mechanism designed to protect facial privacy against deepfake face-swapping threats. FSG subtly perturbs a user's facial image, disrupting the features extracted by identity encoders and thereby misleading face-swapping techniques. As a result, the swapped images carry identities significantly different from the original user's, reducing the face match rate from over 90% to below 10%.
Abstract
DeepFakes pose a significant threat to our society. One representative DeepFake application is face-swapping, which replaces the identity in a facial image with that of a victim. Although existing methods partially mitigate these risks by degrading the quality of swapped images, they often fail to disrupt the identity transformation effectively. To fill this gap, we propose FaceSwapGuard (FSG), a novel black-box defense mechanism against deepfake face-swapping threats. Specifically, FSG introduces imperceptible perturbations to a user's facial image, disrupting the features extracted by identity encoders. When shared online, these perturbed images mislead face-swapping techniques, causing them to generate facial images with identities significantly different from the original user. Extensive experiments demonstrate the effectiveness of FSG against multiple face-swapping techniques, reducing the face match rate from 90% (without defense) to below 10%. Both qualitative and quantitative studies further confirm its ability to confuse human perception, highlighting its practical utility. Additionally, we investigate key factors that may influence FSG and evaluate its robustness against various adaptive adversaries.
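The core idea described in the abstract, adding a bounded, imperceptible perturbation that pushes an identity encoder's embedding away from the user's true identity, can be illustrated with a generic PGD-style sketch. This is not the authors' implementation: the toy linear `encoder`, the finite-difference gradient, and all parameter values (`eps`, `step`, `iters`) are illustrative assumptions; FSG itself operates in a black-box setting against real face-swapping pipelines.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def perturb(x, encoder, eps=0.03, step=0.005, iters=40):
    """Iteratively craft an L-infinity-bounded perturbation (|delta| <= eps)
    that drives encoder(x + delta) away from the original identity
    embedding encoder(x), in the spirit of identity obfuscation.

    Uses a finite-difference gradient so no autodiff framework is needed;
    a real attack would backpropagate through a surrogate encoder instead.
    """
    target = encoder(x)               # original identity embedding
    delta = np.zeros_like(x)
    h = 1e-4                          # finite-difference step
    for _ in range(iters):
        grad = np.zeros_like(x)
        for i in range(x.size):       # numerical gradient of cosine similarity
            e = np.zeros_like(x); e[i] = h
            grad[i] = (cosine(encoder(x + delta + e), target)
                       - cosine(encoder(x + delta - e), target)) / (2 * h)
        # sign-gradient step that *decreases* similarity, then project to the eps-ball
        delta = np.clip(delta - step * np.sign(grad), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)  # keep pixel values valid
```

With a small `eps`, the perturbed image stays visually close to the original while its identity embedding drifts away, which is the property that makes downstream face-swapping transfer the wrong identity.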