FakeMark: Deepfake Speech Attribution With Watermarked Artifacts

Authors: Wanying Ge, Xin Wang, Junichi Yamagishi

Published: 2025-10-14 00:56:44+00:00

AI Summary

FakeMark is a novel watermarking framework designed for robust deepfake speech attribution, addressing the weaknesses of conventional classifier-based and watermarking-based solutions. It injects watermarks that are correlated with the intrinsic acoustic artifacts of specific deepfake systems, enabling the detector to leverage both cues for source identification. This design significantly improves generalization to domain-shifted samples and maintains high accuracy under various distortions and malicious removal attacks.

Abstract

Deepfake speech attribution remains challenging for existing solutions. Classifier-based solutions often fail to generalize to domain-shifted samples, and watermarking-based solutions are easily compromised by distortions like codec compression or malicious removal attacks. To address these issues, we propose FakeMark, a novel watermarking framework that injects artifact-correlated watermarks associated with deepfake systems rather than pre-assigned bitstring messages. This design allows a detector to attribute the source system by leveraging both injected watermark and intrinsic deepfake artifacts, remaining effective even if one of these cues is elusive or removed. Experimental results show that FakeMark improves generalization to cross-dataset samples where classifier-based solutions struggle and maintains high accuracy under various distortions where conventional watermarking-based solutions fail.


Key findings
FakeMark achieved near-perfect attribution accuracy (1.00) on cross-dataset samples (ASVspoof5 + TIMIT-TTS) where classifier baselines failed dramatically (accuracy below 0.12). The framework proved highly robust against strong distortions (neural codecs and vocoders) and removal attacks, substantially outperforming conventional watermarking baselines. A trade-off exists between the two variants: the spectrogram-based FakeMarkT often showed superior attribution robustness under distortion, while the waveform-based FakeMarkA preserved higher objective speech quality (SI-SNR, PESQ).
Approach
FakeMark uses a watermarking framework in which the generator injects artifact-correlated watermarks conditioned on the source system label. The detector is jointly trained with an attribution loss (on clean signals) and a watermark detection loss (on watermarked signals), allowing it to attribute the source system from either the injected watermark or the inherent deepfake artifacts present in the signal. The generation process can operate either on waveforms (FakeMarkA) or on spectrogram features (FakeMarkT).
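
The sketch below illustrates the joint training objective described above: an attribution loss computed on the clean deepfake signal and a watermark detection loss computed on the label-conditioned watermarked signal. It is a minimal illustration only; the module names, architectures, and hyperparameters (WatermarkGenerator, AttributionDetector, alpha) are placeholders assumed for this example and do not reproduce the paper's actual AudioSeal/Timbre-based networks or SSL front end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WatermarkGenerator(nn.Module):
    """Toy generator: injects a watermark residual conditioned on the source-system label."""
    def __init__(self, num_systems: int, emb_dim: int = 64):
        super().__init__()
        self.label_emb = nn.Embedding(num_systems, emb_dim)
        self.proj = nn.Conv1d(1 + emb_dim, 1, kernel_size=9, padding=4)

    def forward(self, wav: torch.Tensor, system_id: torch.Tensor) -> torch.Tensor:
        # wav: (B, 1, T); system_id: (B,)
        emb = self.label_emb(system_id)                        # (B, emb_dim)
        emb = emb.unsqueeze(-1).expand(-1, -1, wav.size(-1))   # (B, emb_dim, T)
        residual = self.proj(torch.cat([wav, emb], dim=1))     # label-conditioned watermark
        return wav + residual                                  # watermarked waveform

class AttributionDetector(nn.Module):
    """Toy detector: predicts the source system from artifacts and/or the watermark."""
    def __init__(self, num_systems: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, num_systems)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(wav).squeeze(-1))

def joint_training_step(gen, det, wav, system_id, optimizer, alpha: float = 1.0):
    """One step of the joint objective: attribution loss on the clean deepfake
    signal plus watermark detection loss on the watermarked signal."""
    optimizer.zero_grad()
    logits_clean = det(wav)                        # detector sees the unwatermarked deepfake
    loss_attr = F.cross_entropy(logits_clean, system_id)
    wav_wm = gen(wav, system_id)                   # inject the label-conditioned watermark
    logits_wm = det(wav_wm)                        # detector sees the watermarked signal
    loss_wm = F.cross_entropy(logits_wm, system_id)
    loss = loss_attr + alpha * loss_wm             # weighting alpha is assumed, not from the paper
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the detector is optimized on both clean and watermarked inputs, it can still attribute the source system when one cue is missing, e.g. when a removal attack strips the watermark but the intrinsic artifacts remain.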
Datasets
MLAAD v5, ASVspoof5, TIMIT-TTS
Model(s)
FakeMarkA, FakeMarkT, AudioSeal (as architecture basis), Timbre (as architecture basis), MMS-300M (pre-trained SSL model), ResNet34
Author countries
Japan