Cross-Domain Audio Deepfake Detection: Dataset and Analysis

Authors: Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang

Published: 2024-04-07 10:10:15+00:00

AI Summary

This paper addresses the issue of outdated datasets in audio deepfake detection (ADD) by constructing a new cross-domain ADD dataset (CD-ADD) comprising over 300 hours of speech generated by five advanced zero-shot TTS models. It demonstrates that pre-trained speech encoders, like Wav2Vec2-large and Whisper-medium, achieve strong detection performance through novel attack-augmented training and exhibit outstanding few-shot ADD ability, though neural codec compression remains a significant challenge.

Abstract

Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose higher risks as they can clone voices with a single utterance. However, the existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data that is generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates of 4.1% and 6.5% respectively. Additionally, we demonstrate our models' outstanding few-shot ADD ability by fine-tuning with just one minute of target-domain data. Nonetheless, neural codec compressors greatly affect the detection accuracy, necessitating further research.


Key findings
The cross-domain ADD task is challenging, and attack-augmented training significantly improves model adaptability and resilience. With this training, Wav2Vec2-large and Whisper-medium achieved equal error rates of 4.1% and 6.5% respectively. The models also demonstrated outstanding few-shot learning capabilities, adapting with as little as one minute of target-domain data, but detection accuracy is greatly degraded by neural codec compressors.
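The equal error rate (EER) quoted above is the operating point at which the false-acceptance rate (spoofs accepted as bonafide) equals the false-rejection rate (bonafide rejected as spoof). A minimal NumPy sketch of the metric, as a hypothetical helper rather than the paper's evaluation code:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: the rate at the threshold where false-accept and
    false-reject rates coincide.

    scores: higher means more likely bonafide; labels: 1 bonafide, 0 spoof.
    Returns (far + frr) / 2 at the threshold minimizing |far - frr|.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_spoof = max((labels == 0).sum(), 1)
    n_bona = max((labels == 1).sum(), 1)
    best = (1.0, 1.0)  # (gap between rates, candidate EER)
    for thr in np.unique(scores):
        accept = scores >= thr
        far = (accept & (labels == 0)).sum() / n_spoof   # spoofs accepted
        frr = (~accept & (labels == 1)).sum() / n_bona   # bonafide rejected
        best = min(best, (abs(far - frr), (far + frr) / 2))
    return best[1]
```

On a perfectly separable score list the function returns 0.0; the 4.1% figure above means the two error rates cross at 0.041 on the CD-ADD test set.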
Approach
The authors constructed a new cross-domain ADD dataset using five advanced zero-shot TTS models and diverse audio prompts, employing nine different attack methods including DNN-based codecs and noise reduction. For detection, they fine-tuned pre-trained speech encoders (Wav2Vec2 and Whisper) with a classifier head, using an attack-augmented training strategy, and showed that few-shot fine-tuning suffices to adapt to new TTS systems.
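Attack-augmented training of the kind described above amounts to randomly perturbing waveforms during training so the detector does not overfit to clean synthesis artifacts. A minimal sketch with stand-in attacks (mu-law codec roundtrip, additive noise, clipping); the paper's actual nine attacks include DNN-based codecs, which this illustrative code does not implement:

```python
import numpy as np

rng = np.random.default_rng(0)

def mu_law_roundtrip(wav, mu=255):
    """Crude codec stand-in: mu-law companding plus 8-bit quantization."""
    comp = np.sign(wav) * np.log1p(mu * np.abs(wav)) / np.log1p(mu)
    quant = np.round((comp + 1) / 2 * mu) / mu * 2 - 1
    return np.sign(quant) * np.expm1(np.abs(quant) * np.log1p(mu)) / mu

def add_noise(wav, snr_db=20):
    """Add Gaussian noise at a fixed signal-to-noise ratio."""
    noise = rng.standard_normal(wav.shape)
    scale = np.sqrt(np.mean(wav**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
    return wav + scale * noise

ATTACKS = [mu_law_roundtrip, add_noise, lambda w: np.clip(w, -0.5, 0.5)]

def attack_augment(wav, p=0.5):
    """With probability p, apply one randomly chosen attack to the waveform."""
    if rng.random() < p:
        wav = ATTACKS[rng.integers(len(ATTACKS))](wav)
    return wav
```

In a training loop, `attack_augment` would be applied to each batch element before feature extraction, exposing the classifier to codec- and noise-corrupted versions of both bonafide and spoofed speech.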
Datasets
CD-ADD (newly constructed, with prompts drawn from LibriTTS and TED-LIUM 3), ASVspoof 2019
Model(s)
Wav2Vec2-base, Wav2Vec2-large, Whisper-medium
Author countries
China