LoRA Patching: Exposing the Fragility of Proactive Defenses against Deepfakes
Authors: Zuomin Qu, Yimao Guo, Qianyue Hu, Wei Lu
Published: 2025-10-04 09:22:26+00:00
AI Summary
This paper reveals the fragility of proactive Deepfake defenses, which rely on embedding adversarial perturbations in facial images to prevent manipulation. The authors propose LoRA Patching, a novel attack that injects Low-Rank Adaptation (LoRA) patches into Deepfake generators to bypass these defenses efficiently. The method pairs a learnable gating mechanism, which adaptively controls the patch's effect and prevents gradient explosions during fine-tuning, with a Multi-Modal Feature Alignment (MMFA) loss that aligns the features of adversarial outputs with those of the desired outputs at the semantic level.
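To make the core idea concrete, here is a minimal sketch of a gated LoRA patch wrapped around one frozen layer of a generator. This is an illustrative reconstruction, not the authors' released code: the class name, rank, initialization, and the sigmoid gate are assumptions; only the LoRA patch parameters are trained while the base generator stays frozen.

```python
import torch
import torch.nn as nn

class GatedLoRAPatch(nn.Module):
    """Hypothetical sketch: a frozen generator layer plus a low-rank
    residual branch whose contribution is scaled by a learnable gate."""

    def __init__(self, base_layer: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base_layer
        for p in self.base.parameters():
            p.requires_grad_(False)  # only the patch is fine-tuned
        in_f, out_f = base_layer.in_features, base_layer.out_features
        # Standard LoRA factors: B starts at zero, so the patch is
        # initially a no-op and the generator's behavior is unchanged.
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        # Learnable gate; the sigmoid bounds the patch's effect,
        # which damps gradients early in fine-tuning.
        self.gate = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = x @ self.lora_A.t() @ self.lora_B.t()
        return self.base(x) + torch.sigmoid(self.gate) * delta
```

Because the patch is a small additive branch, it can be attached to (or removed from) a pretrained generator without retraining the generator itself, which is what makes the attack "plug-and-play" and cheap enough to train on roughly 1,000 examples in a single epoch, as the abstract reports.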
Abstract
Deepfakes pose significant societal risks, motivating the development of proactive defenses that embed adversarial perturbations in facial images to prevent manipulation. However, in this paper, we show that these preemptive defenses often lack robustness and reliability. We propose a novel approach, Low-Rank Adaptation (LoRA) patching, which injects a plug-and-play LoRA patch into Deepfake generators to bypass state-of-the-art defenses. A learnable gating mechanism adaptively controls the effect of the LoRA patch and prevents gradient explosions during fine-tuning. We also introduce a Multi-Modal Feature Alignment (MMFA) loss, encouraging the features of adversarial outputs to align with those of the desired outputs at the semantic level. Beyond bypassing, we present defensive LoRA patching, embedding visible warnings in the outputs as a complementary solution to mitigate this newly identified security vulnerability. With only 1,000 facial examples and a single epoch of fine-tuning, LoRA patching successfully defeats multiple proactive defenses. These results reveal a critical weakness in current paradigms and underscore the need for more robust Deepfake defense strategies. Our code is available at https://github.com/ZOMIN28/LoRA-Patching.
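The abstract describes the MMFA loss only at a high level: it encourages the patched generator's adversarial outputs to match the desired outputs in feature space rather than pixel space. The sketch below is one plausible reading under stated assumptions; the choice of pretrained encoders standing in for the "multi-modal" features, the cosine-distance form, and the function name `mmfa_loss` are all illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def mmfa_loss(adv_output: torch.Tensor,
              desired_output: torch.Tensor,
              encoders: list) -> torch.Tensor:
    """Illustrative feature-alignment loss: pull the encoder features
    of the patched generator's output toward those of the desired
    (unprotected) output at the semantic level."""
    loss = adv_output.new_zeros(())
    for enc in encoders:
        with torch.no_grad():
            target = enc(desired_output)  # fixed target features
        feat = enc(adv_output)            # features of adversarial output
        # Cosine distance between flattened feature maps.
        loss = loss + (1.0 - F.cosine_similarity(
            feat.flatten(1), target.flatten(1), dim=1)).mean()
    return loss / max(len(encoders), 1)
```

Aligning features rather than raw pixels is a common way to preserve semantic content while tolerating low-level differences, which fits the abstract's stated goal of keeping adversarial outputs close to the desired outputs "at the semantic level."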