Cross-Domain Audio Deepfake Detection: Dataset and Analysis
Authors: Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang
Published: 2024-04-07
AI Summary
This paper addresses the issue of outdated datasets in audio deepfake detection (ADD) by constructing a new cross-domain ADD dataset (CD-ADD) comprising over 300 hours of speech generated by five advanced zero-shot TTS models. It demonstrates that pre-trained speech encoders, like Wav2Vec2-large and Whisper-medium, achieve strong detection performance through novel attack-augmented training and exhibit outstanding few-shot ADD ability, though neural codec compression remains a significant challenge.
Abstract
Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose higher risks as they can clone voices with a single utterance. However, the existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data that is generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates of 4.1% and 6.5% respectively. Additionally, we demonstrate our models' outstanding few-shot ADD ability by fine-tuning with just one minute of target-domain data. Nonetheless, neural codec compressors greatly affect the detection accuracy, necessitating further research.
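The abstract reports detector quality as an equal error rate (EER): the operating point where the false acceptance rate (spoofed audio accepted as genuine) equals the false rejection rate (genuine audio rejected). The sketch below is a minimal, generic EER computation over detector scores; it is not the paper's evaluation code, and the function name and score convention (higher score = more likely genuine) are assumptions for illustration.

```python
import numpy as np

def equal_error_rate(bonafide_scores, spoof_scores):
    """Estimate the EER from detector scores.

    Assumes higher scores indicate genuine (bona fide) speech.
    Sweeps candidate thresholds taken from the scores themselves and
    returns the point where FAR and FRR are closest.
    """
    bonafide_scores = np.asarray(bonafide_scores, dtype=float)
    spoof_scores = np.asarray(spoof_scores, dtype=float)
    thresholds = np.sort(np.concatenate([bonafide_scores, spoof_scores]))

    # False acceptance rate: spoofed samples scoring at/above threshold.
    far = np.array([(spoof_scores >= t).mean() for t in thresholds])
    # False rejection rate: genuine samples scoring below threshold.
    frr = np.array([(bonafide_scores < t).mean() for t in thresholds])

    # EER is approximated at the threshold where FAR and FRR cross.
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2

# Example with perfectly separated scores: EER should be 0.
print(equal_error_rate([0.9, 0.8], [0.1, 0.2]))  # → 0.0
```

A finer-grained estimate interpolates between the two thresholds that bracket the FAR/FRR crossing, but the nearest-crossing approximation above is adequate for reporting percentages at the precision used in the abstract.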