A General Model for Deepfake Speech Detection: Diverse Bonafide Resources or Diverse AI-Based Generators

Authors: Lam Pham, Khoi Vu, Dat Tran, David Fischinger, Simon Freitter, Marcel Hasenbalg, Davide Antonutti, Alexander Schindler, Martin Boyer, Ian McLoughlin

Published: 2026-03-29 07:43:47+00:00

AI Summary

This paper analyzes how the balance of Bonafide Resources (BR) and AI-based Generators (AG) in training data affects the performance and generality of Deepfake Speech Detection (DSD) models. The authors propose a deep-learning baseline and, based on experimental findings, introduce a balanced DSD dataset (the BR-AG dataset) by re-using public resources. After training various models on this dataset, cross-dataset evaluations demonstrate that a balanced BR and AG distribution is the key factor for achieving a generalizable DSD model.

Abstract

In this paper, we analyze two main factors, Bonafide Resource (BR) and AI-based Generator (AG), which affect the performance and generality of a Deepfake Speech Detection (DSD) model. To this end, we first propose a deep-learning based model, referred to as the baseline. We then conduct experiments on the baseline to show how the BR and AG factors affect the threshold score used to classify input audio as fake or bonafide during inference. Given these experimental results, we propose a dataset that re-uses public DSD datasets and balances BR and AG. We then train various deep-learning based models on the proposed dataset and conduct cross-dataset evaluation on different benchmark datasets. The cross-dataset evaluation results show that the balance of Bonafide Resources (BR) and AI-based Generators (AG) is the key factor in training a general Deepfake Speech Detection (DSD) model.


Key findings
The study demonstrates that the balance between Bonafide Resources and AI-based Generators in the training data is crucial both for the generality of a Deepfake Speech Detection model and for the consistency of its detection threshold. Training models on the proposed BR-AG dataset, which balances these factors, produced robust DSD models. These models achieved high accuracy (0.87 to 0.99), F1, and AUC scores across diverse unseen benchmark datasets using a fixed probability threshold of 0.5.
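The fixed-threshold evaluation above can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the scores and labels are synthetic, and the convention that label 1 means bonafide is an assumption for the sketch.

```python
# Hypothetical sketch: binarize bonafide-probability scores at the fixed
# threshold of 0.5 used in the paper, then compute accuracy and F1.
# Scores and labels below are synthetic, not taken from the paper.

def evaluate_at_threshold(scores, labels, threshold=0.5):
    """Threshold scores and compute (accuracy, F1).

    labels: 1 = bonafide, 0 = fake (convention assumed for this sketch).
    """
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    accuracy = sum(1 for p, y in zip(preds, labels) if p == y) / len(labels)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, f1

# Synthetic example: four utterances, two bonafide and two fake.
acc, f1 = evaluate_at_threshold([0.9, 0.7, 0.2, 0.4], [1, 1, 0, 0])
```

The point of the paper's finding is that, with balanced BR and AG training data, this threshold does not need to be re-tuned per test dataset.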
Approach
The authors first establish a deep-learning baseline model using a pre-trained WavLM and an MLP to investigate how Bonafide Resource (BR) and AI-based Generator (AG) factors influence the detection threshold. Based on these findings, they propose a new BR-AG dataset, combining existing DSD datasets to achieve a balanced representation of BR and AG. Various deep-learning models (e.g., a finetuned WavLM-Large) are then trained on this BR-AG dataset using a three-stage strategy with multiple loss functions for improved feature separation, followed by cross-dataset evaluation with a fixed threshold to demonstrate generality.
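The baseline pipeline (frozen speech embedding feeding an MLP classifier, with a 0.5 decision threshold at inference) can be sketched roughly as below. This is a minimal NumPy sketch under assumptions: the WavLM embedding is replaced by a random placeholder vector, and the layer sizes and weights are illustrative, not the paper's values.

```python
# Minimal sketch of the baseline described above: a frozen speech
# embedding (WavLM in the paper) feeds a small MLP head that outputs
# a bonafide probability, thresholded at 0.5 at inference.
# All weights and sizes here are placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(embedding, w1, b1, w2, b2):
    """Two-layer MLP: ReLU hidden layer, sigmoid output probability."""
    h = np.maximum(embedding @ w1 + b1, 0.0)  # ReLU hidden layer
    logit = h @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid probability

# Placeholder for a frozen utterance embedding (1024-d, as in WavLM-Large).
embedding = rng.standard_normal(1024)
w1 = rng.standard_normal((1024, 128)) * 0.01
b1 = np.zeros(128)
w2 = rng.standard_normal(128) * 0.01
b2 = 0.0

p_bonafide = mlp_head(embedding, w1, b1, w2, b2)
decision = "bonafide" if p_bonafide >= 0.5 else "fake"
```

In the paper the backbone is either kept frozen or finetuned; only the head and threshold logic are illustrated here.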
Datasets
ASVspoof 2019 (LA), ASVspoof 2021 (LA), ASVspoof 2021 (DF), ASVspoof 2024 (train and development subsets), Librispeech, In-The-Wild, ASVspoof5 Evaluation Subset, FakeAVCeleb. (These datasets are used to form the AG, BR, and proposed BR-AG training datasets, and for testing.)
Model(s)
WavLM (specifically WavLM-Large, used frozen and finetuned), Multilayer Perceptron (MLP), Whisper-base, Wave2Vec2.0-XLSR-53 (Wave2XLSR).
Author countries
Austria, Vietnam, Singapore