Does Audio Deepfake Detection Generalize?

Authors: Nicolas M. Müller, Pavel Czempin, Franziska Dieckmann, Adam Froghyar, Konstantin Böttinger

Published: 2022-03-30 12:48:22+00:00

Comment: Interspeech 2022

AI Summary

This paper systematizes audio deepfake detection by uniformly re-implementing and evaluating twelve existing architectures, identifying key factors behind success, such as feature type (cqtspec or logspec over melspec). The authors introduce a new 'in-the-wild' dataset of celebrity and politician deepfakes alongside authentic audio to assess generalization. The study reveals that current models perform poorly on this real-world data, indicating they are likely over-optimized for the ASVspoof benchmark.

Abstract

Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research. While researchers have presented various techniques for detecting audio spoofs, it is often unclear exactly why these architectures are successful: Preprocessing steps, hyperparameter settings, and the degree of fine-tuning are not consistent across related work. Which factors contribute to success, and which are accidental? In this work, we address this problem: We systematize audio spoofing detection by re-implementing and uniformly evaluating architectures from related work. We identify overarching features for successful audio deepfake detection, such as using cqtspec or logspec features instead of melspec features, which improves performance by 37% EER on average, all other factors constant. Additionally, we evaluate generalization capabilities: We collect and publish a new dataset consisting of 37.9 hours of found audio recordings of celebrities and politicians, of which 17.2 hours are deepfakes. We find that related work performs poorly on such real-world data (performance degradation of up to one thousand percent). This may suggest that the community has tailored its solutions too closely to the prevailing ASVSpoof benchmark and that deepfakes are much harder to detect outside the lab than previously thought.


Key findings
Using cqtspec or logspec features instead of melspec improves performance by 37% EER on average. Full-length audio inputs significantly outperform fixed four-second inputs. Crucially, models trained on ASVspoof 2019 show severe performance degradation (up to a 1000% EER increase) on the real-world 'in-the-wild' dataset, indicating poor generalization and over-tailoring to the benchmark.
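All results above are reported as equal error rate (EER): the operating point at which the false-acceptance rate on spoofed audio equals the false-rejection rate on bona fide audio. The paper does not include code; the following is a minimal numpy sketch of how EER is typically computed from detection scores (the function name and the threshold-sweep approach are my own, not the authors').

```python
import numpy as np

def compute_eer(scores_bonafide, scores_spoof):
    """Equal error rate: the point where the false-rejection rate on
    bona fide audio equals the false-acceptance rate on spoofed audio.
    Convention: higher score = more likely bona fide."""
    scores = np.concatenate([scores_bonafide, scores_spoof])
    labels = np.concatenate([np.ones_like(scores_bonafide),
                             np.zeros_like(scores_spoof)])
    # Sweep the decision threshold over all observed scores (ascending).
    order = np.argsort(scores)
    sorted_labels = labels[order]
    n_bona = sorted_labels.sum()
    n_spoof = len(sorted_labels) - n_bona
    # After accepting only scores above position i:
    # FRR = fraction of bona fide rejected, FAR = fraction of spoofs accepted.
    frr = np.cumsum(sorted_labels) / n_bona
    far = 1.0 - np.cumsum(1 - sorted_labels) / n_spoof
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2.0
```

On perfectly separable scores this returns 0.0; a random detector approaches 0.5. A "1000% EER increase" thus means, e.g., going from 2% EER on the benchmark to over 20% on in-the-wild data.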
Approach
The authors re-implement and uniformly evaluate twelve audio spoof detection architectures, systematically varying feature extraction techniques (cqtspec, logspec, melspec, raw) and input audio lengths (full vs. 4s). They introduce a novel 'in-the-wild' dataset to assess the generalization capabilities of these models on realistic, unseen deepfakes.
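The feature types compared above differ mainly in frequency resolution: logspec keeps the full linear-frequency STFT, while melspec compresses it through a mel filterbank. Below is a hedged, numpy-only illustration of these two techniques in general, not the authors' actual pipeline; window size, hop length, sample rate, and filterbank parameters are assumptions, and cqtspec (the constant-Q transform) is omitted for brevity.

```python
import numpy as np

def stft_mag(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a framed, Hann-windowed FFT."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)

def logspec(signal, eps=1e-10):
    """Log-magnitude spectrogram: full linear frequency resolution."""
    return np.log(stft_mag(signal) + eps)

def _hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def _mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def _mel_filterbank(sr, n_fft, n_mels):
    """Triangular mel filters mapping n_fft//2+1 FFT bins to n_mels bands."""
    n_freq = n_fft // 2 + 1
    mel_pts = np.linspace(_hz_to_mel(0.0), _hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * _mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_freq))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fb[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[m - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def melspec(signal, sr=16000, n_fft=512, n_mels=40, eps=1e-10):
    """Log-mel spectrogram: the filterbank averages away fine spectral
    detail, which the paper links to weaker spoof detection."""
    mag = stft_mag(signal, n_fft=n_fft)
    return np.log(_mel_filterbank(sr, n_fft, n_mels) @ mag + eps)
```

The contrast is visible in the output shapes: for one second of 16 kHz audio, logspec yields 257 frequency bins per frame while melspec collapses them to 40 mel bands, discarding exactly the kind of fine spectral structure that spoofing artifacts may live in.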
Datasets
ASVspoof 2019 (Logical Access part, LA), 'in-the-wild' dataset (newly collected, 37.9 hours of celebrity/politician audio, 17.2 hours deepfakes).
Model(s)
LSTM, LCNN, LCNN-Attention, LCNN-LSTM, MesoNet, MesoInception, ResNet18, Transformer, CRNNSpoof, RawNet2, RawPC, RawGAT-ST.
Author countries
Germany