FakeMark: Deepfake Speech Attribution With Watermarked Artifacts

Authors: Wanying Ge, Xin Wang, Junichi Yamagishi

Published: 2025-10-14 00:56:44+00:00

AI Summary

FakeMark is a novel watermarking framework for deepfake speech attribution that injects artifact-correlated watermarks associated with deepfake systems, rather than pre-assigned bitstring messages. This design allows a detector to attribute the source system by leveraging both injected watermarks and intrinsic deepfake artifacts, maintaining effectiveness even when one cue is elusive or removed. Experimental results demonstrate improved generalization to cross-dataset samples and high accuracy under various distortions and removal attacks.

Abstract

Deepfake speech attribution remains challenging for existing solutions. Classifier-based solutions often fail to generalize to domain-shifted samples, and watermarking-based solutions are easily compromised by distortions like codec compression or malicious removal attacks. To address these issues, we propose FakeMark, a novel watermarking framework that injects artifact-correlated watermarks associated with deepfake systems rather than pre-assigned bitstring messages. This design allows a detector to attribute the source system by leveraging both the injected watermark and intrinsic deepfake artifacts, remaining effective even if one of these cues is elusive or removed. Experimental results show that FakeMark improves generalization to cross-dataset samples where classifier-based solutions struggle and maintains high accuracy under various distortions where conventional watermarking-based solutions fail.


Key findings
FakeMark significantly improves generalization to cross-dataset samples where classifier-based solutions fail, and maintains high attribution accuracy under diverse distortions (e.g., neural codecs, vocoders) and watermark removal attacks. This robustness comes at a cost, however: the authors observe a trade-off between watermark robustness and the perceptual quality of the watermarked speech signals.
Approach
FakeMark addresses deepfake speech attribution by injecting watermarks that are correlated with the acoustic artifacts specific to each deepfake system. The detector then relies on two cues for source attribution: the injected watermarks and the intrinsic deepfake artifacts themselves. This dual reliance keeps attribution robust even when the signal is distorted or the watermark is deliberately removed.
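The dual-cue idea can be sketched in a few lines of numpy. This is purely illustrative and not the paper's implementation: the "artifact directions", the injection strength, and the projection-based scoring are all assumptions made for the toy example; in FakeMark these components are learned neural encoders and detectors.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SYSTEMS = 3   # hypothetical number of known deepfake systems
DIM = 16        # toy feature dimension

# One fixed "artifact direction" per system, standing in for the intrinsic
# acoustic artifacts each generator leaves in its output (orthonormal here
# for clarity; real artifacts are learned, not hand-picked).
artifact_dirs = np.linalg.qr(rng.normal(size=(DIM, N_SYSTEMS)))[0].T

def inject_watermark(features, system_id, strength=0.5):
    """Add a watermark correlated with the system's artifact direction,
    rather than an arbitrary pre-assigned bitstring (the FakeMark idea)."""
    return features + strength * artifact_dirs[system_id]

def attribute(features):
    """Dual-cue detector sketch: score each system by how strongly the
    signal projects onto its artifact/watermark direction; either the
    injected watermark or the residual artifacts can drive the score."""
    return int(np.argmax(artifact_dirs @ features))

# Simulated deepfake output: weak intrinsic artifacts from system 1 ...
speech = 0.2 * artifact_dirs[1] + 0.02 * rng.normal(size=DIM)
# ... then watermarked with the matching artifact-correlated mark.
marked = inject_watermark(speech, system_id=1)

print(attribute(marked))   # watermark and artifact cues agree
print(attribute(speech))   # even without the mark, artifacts still attribute
```

Because the watermark reinforces the same direction the artifacts already occupy, attribution degrades gracefully: stripping the watermark leaves the artifact cue, and masking the artifacts leaves the watermark cue, which is the robustness property the paper reports.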
Datasets
MLAAD v5, ASVspoof5, TIMIT-TTS
Model(s)
FakeMarkA (encoder-decoder based on AudioSeal), FakeMarkT (spectrogram-based encoder based on Timbre), Detector (pre-trained SSL front-end, MMS-300M, with a fully connected back-end classifier)
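The detector's shape can be sketched as a frozen SSL front-end followed by a pooled fully connected classifier. Everything below is a stand-in, not the paper's code: the random-projection "front-end", the 320-sample hop, and the 1024-dimensional embedding size (typical of wav2vec 2.0 large-style models such as MMS-300M) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

EMBED_DIM = 1024   # assumed MMS-300M hidden size; stand-in value
N_SYSTEMS = 3      # hypothetical number of attributable systems

def ssl_frontend(waveform):
    """Stand-in for a frozen SSL front-end (e.g. MMS-300M): maps a raw
    waveform to frame-level embeddings. A random projection replaces the
    real transformer here; only the shapes are meaningful."""
    n_frames = len(waveform) // 320          # ~20 ms hop at 16 kHz, assumed
    proj = rng.normal(size=(320, EMBED_DIM)) / np.sqrt(320)
    frames = waveform[: n_frames * 320].reshape(n_frames, 320)
    return frames @ proj

def fc_backend(embeddings, weights, bias):
    """Fully connected back-end: mean-pool over time, then one linear
    layer producing a score per candidate deepfake system."""
    pooled = embeddings.mean(axis=0)
    return pooled @ weights + bias

waveform = rng.normal(size=16000)            # 1 s of 16 kHz "audio"
W = rng.normal(size=(EMBED_DIM, N_SYSTEMS))
b = np.zeros(N_SYSTEMS)

emb = ssl_frontend(waveform)                 # (frames, EMBED_DIM)
scores = fc_backend(emb, W, b)               # (N_SYSTEMS,)
print(scores.shape)
```

Mean-pooling frame embeddings before a small classifier is a common pattern for SSL-based speech classifiers; the paper's exact pooling and back-end details are not specified in this summary.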
Author countries
Japan