Diffusion-Guided Adversarial Perturbation Injection for Generalizable Defense Against Facial Manipulations

Authors: Yue Li, Linying Xue, Kaiqing Lin, Hanyu Quan, Dongdong Lin, Hui Tian, Hongxia Wang, Bin Wang

Published: 2026-04-02 05:24:11+00:00

AI Summary

This paper introduces AEGIS, a novel diffusion-guided proactive defense that injects adversarial perturbations into facial images to protect against deepfake manipulation. Unlike prior L∞-bounded approaches, AEGIS injects perturbations into the latent space of a DDIM denoising trajectory, allowing adaptive amplification without pixel-level constraints. It demonstrates strong defense effectiveness and transferability across both GAN and diffusion-based deepfake generators in white-box and black-box settings, while maintaining high perceptual quality.

Abstract

Recent advances in GAN and diffusion models have significantly improved the realism and controllability of facial deepfake manipulation, raising serious concerns regarding privacy, security, and identity misuse. Proactive defenses attempt to counter this threat by injecting adversarial perturbations into images before manipulation takes place. However, existing approaches remain limited in effectiveness due to suboptimal perturbation injection strategies and are typically designed under white-box assumptions, targeting only simple GAN-based attribute editing. These constraints hinder their applicability in practical real-world scenarios. In this paper, we propose AEGIS, the first diffusion-guided paradigm in which the AdvErsarial facial images are Generated for Identity Shielding. We observe that the limited defense capability of existing approaches stems from the peak-clipping constraint, where perturbations are forcibly truncated by a fixed $L_\infty$ bound. To overcome this limitation, instead of directly modifying pixels, AEGIS injects adversarial perturbations into the latent space along the DDIM denoising trajectory, thereby decoupling the perturbation magnitude from pixel-level constraints and allowing perturbations to adaptively amplify where most effective. The extensible design of AEGIS allows the defense to be expanded from purely white-box use to also support black-box scenarios through a gradient-estimation strategy. Extensive experiments across GAN and diffusion-based deepfake generators show that AEGIS consistently delivers strong defense effectiveness while maintaining high perceptual quality. In white-box settings, it achieves robust manipulation disruption, whereas in black-box settings, it demonstrates strong cross-model transferability.
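For intuition, the peak-clipping constraint described above can be contrasted with injection along a deterministic DDIM denoising step. The sketch below is illustrative only: the random perturbation, toy noise prediction, and schedule values are placeholder assumptions, not the paper's actual optimization.

```python
import numpy as np

def linf_clip(delta, eps):
    """Peak-clipping under a fixed L-infinity budget: every component
    of a pixel-space perturbation is truncated to [-eps, eps]."""
    return np.clip(delta, -eps, eps)

def ddim_step(x_t, eps_pred, alpha_t, alpha_prev):
    """One deterministic DDIM denoising update (eta = 0).
    A latent-space defense can perturb x_t at steps like this one,
    so its magnitude is not constrained by a pixel-level eps budget."""
    x0_pred = (x_t - np.sqrt(1.0 - alpha_t) * eps_pred) / np.sqrt(alpha_t)
    return np.sqrt(alpha_prev) * x0_pred + np.sqrt(1.0 - alpha_prev) * eps_pred

rng = np.random.default_rng(0)
delta = rng.normal(scale=0.1, size=(8, 8))    # raw pixel-space perturbation
clipped = linf_clip(delta, eps=0.03)          # what an L_inf defense keeps
x_t = rng.normal(size=(8, 8))                 # toy latent at step t
eps_pred = rng.normal(size=(8, 8))            # stand-in noise prediction
x_prev = ddim_step(x_t, eps_pred, alpha_t=0.5, alpha_prev=0.7)
```

The point of the contrast: `linf_clip` discards any perturbation energy beyond `eps` regardless of where it would be most effective, while a perturbation applied to `x_t` propagates through the denoising trajectory with no such truncation.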


Key findings
AEGIS consistently achieved top-tier defense effectiveness across various GAN and diffusion-based deepfake models in both white-box and black-box settings, while maintaining high perceptual quality of the protected images. It demonstrated robust manipulation disruption in white-box scenarios and strong cross-model transferability in black-box settings. Furthermore, AEGIS significantly reduced identity-recognizable information in adversarial images, mitigating risks of facial stigmatization and identity misuse.
Approach
AEGIS injects adversarial perturbations into the latent space of a DDIM denoising trajectory, guided by a pre-trained DDIM model. This decouples perturbation magnitude from pixel-level constraints, allowing adaptive amplification in semantically sensitive regions. It supports both white-box scenarios (using direct model gradients) and black-box scenarios (using Natural Evolution Strategies for gradient estimation).
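The Natural Evolution Strategies step mentioned above can be sketched as antithetic-sampling gradient estimation from function values alone. The quadratic objective here is a placeholder for the black-box disruption loss; the sampler parameters are illustrative assumptions.

```python
import numpy as np

def nes_gradient(f, x, sigma=0.01, n_samples=50, rng=None):
    """Estimate grad f(x) using only function evaluations, via the
    standard NES antithetic Gaussian estimator used in black-box attacks:
    grad ~= (1 / (2 * sigma * N)) * sum_i [f(x + sigma*u_i) - f(x - sigma*u_i)] * u_i
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (f(x + sigma * u) - f(x - sigma * u)) * u
    return grad / (2.0 * sigma * n_samples)

# Sanity check on a known objective: f(x) = ||x||^2, whose gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
g = nes_gradient(lambda v: float(v @ v), x, sigma=0.01, n_samples=2000)
```

Antithetic pairs (`+u`, `-u`) cancel the zeroth-order term of the finite difference, which keeps the estimator usable when each query is an expensive pass through a black-box deepfake generator.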
Datasets
CelebA, FFHQ, LFW
Model(s)
UNKNOWN
Author countries
China