LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks
Authors: Tianyi Wang, Mengxiao Huang, Harry Cheng, Xiao Zhang, Zhiqi Shen
Published: 2024-11-26 08:24:56+00:00
Comment: Accepted to ACM MM 2024
AI Summary
This paper introduces LampMark, a proactive Deepfake detection method that uses training-free landmark perceptual watermarks. It analyzes the structure-sensitive nature of Deepfake manipulations to transform facial landmarks into secure binary watermarks. An end-to-end watermarking framework then robustly embeds and extracts these watermarks, enabling Deepfake detection by assessing the consistency between content-matched and recovered watermarks.
Abstract
Deepfake facial manipulation has garnered significant public attention due to its impact on both enhancing human experiences and posing privacy threats. Although numerous passive algorithms have been proposed to thwart malicious Deepfake attacks, most struggle to generalize when confronted with hyper-realistic synthetic facial images. To tackle this problem, this paper proposes a proactive Deepfake detection approach based on a novel training-free landmark perceptual watermark, LampMark for short. We first analyze the structure-sensitive characteristics of Deepfake manipulations and devise a secure and confidential transformation pipeline from structural representations, i.e., facial landmarks, to binary landmark perceptual watermarks. We then present an end-to-end watermarking framework that imperceptibly and robustly embeds watermarks into, and extracts them from, the images to be protected. Relying on promising watermark recovery accuracy, Deepfake detection is accomplished by assessing the consistency between the content-matched landmark perceptual watermark and the watermark robustly recovered from the suspect image. Experimental results demonstrate the superior performance of our approach in watermark recovery and Deepfake detection compared to state-of-the-art methods across in-dataset, cross-dataset, and cross-manipulation scenarios.
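The final detection step described above — comparing the content-matched watermark against the one recovered from a suspect image — can be sketched as a simple bit-consistency check. The following is an illustrative sketch, not the authors' implementation; the function names and the threshold `tau` are hypothetical choices for exposition.

```python
# Illustrative sketch of watermark-consistency Deepfake detection.
# A manipulated face changes the landmark structure, so the recovered
# watermark diverges from the content-matched one.

def bit_accuracy(w_ref, w_rec):
    """Fraction of matching bits between two equal-length binary watermarks."""
    assert len(w_ref) == len(w_rec)
    return sum(a == b for a, b in zip(w_ref, w_rec)) / len(w_ref)

def is_deepfake(w_ref, w_rec, tau=0.85):
    """Flag the image as manipulated when consistency falls below tau.

    tau is a hypothetical threshold; in practice it would be calibrated
    on held-out pristine and manipulated samples.
    """
    return bit_accuracy(w_ref, w_rec) < tau

# A pristine image recovers its embedded watermark almost perfectly:
w = [1, 0, 1, 1, 0, 0, 1, 0] * 8          # a 64-bit binary watermark
print(is_deepfake(w, w))                  # -> False

# A structurally altered (e.g. face-swapped) image yields a mismatched
# watermark, so the consistency check flags it:
flipped = [1 - b for b in w]
print(is_deepfake(w, flipped))            # -> True
```

In practice, the recovered watermark would come from the paper's robust extractor rather than being compared verbatim, which is why detection relies on a threshold over bit accuracy instead of exact equality.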