A General Model for Deepfake Speech Detection: Diverse Bonafide Resources or Diverse AI-Based Generators
Authors: Lam Pham, Khoi Vu, Dat Tran, David Fischinger, Simon Freitter, Marcel Hasenbalg, Davide Antonutti, Alexander Schindler, Martin Boyer, Ian McLoughlin
Published: 2026-03-29 07:43:47+00:00
AI Summary
This paper analyzes how the balance between Bonafide Resources (BR) and AI-based Generators (AG) affects the performance and generality of Deepfake Speech Detection (DSD) models. The authors propose a deep-learning baseline and, based on experimental findings, introduce a balanced DSD dataset (the BR-AG dataset) by re-using public resources. Cross-dataset evaluations of various models trained on this dataset demonstrate that a balanced BR and AG distribution is the key factor in achieving a generalizable DSD model.
Abstract
In this paper, we analyze two main factors, Bonafide Resources (BR) and AI-based Generators (AG), which affect the performance and generality of a Deepfake Speech Detection (DSD) model. To this end, we first propose a deep-learning based model, referred to as the baseline. We then conduct experiments on the baseline to show how the BR and AG factors affect the threshold score used to distinguish fake from bonafide input audio during inference. Given the experimental results, we propose a dataset that re-uses public DSD datasets and is balanced across both BR and AG. We then train various deep-learning based models on the proposed dataset and conduct cross-dataset evaluation on different benchmark datasets. The cross-dataset evaluation results show that the balance of BR and AG is the key factor in training a general DSD model.
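The abstract describes comparing a model's output score against a threshold to label input audio as fake or bonafide at inference time. A minimal sketch of this decision step is shown below; the score values, the 0.5 threshold, and the `classify` helper are illustrative assumptions, not the paper's actual baseline or its tuned operating point.

```python
# Hypothetical sketch of threshold-based DSD inference: a detector produces a
# "bonafide" score per utterance, and a fixed threshold converts it to a label.
# The scores below are made-up stand-ins for a deep-learning model's output.

def classify(score: float, threshold: float = 0.5) -> str:
    """Label an utterance by comparing its bonafide score to a threshold."""
    return "bonafide" if score >= threshold else "fake"

# Example scores for three utterances (illustrative values only).
scores = [0.92, 0.31, 0.58]
labels = [classify(s, threshold=0.5) for s in scores]
print(labels)
```

The paper's point is that this threshold is sensitive to the training data: if the bonafide resources or the generators differ between training and deployment, the score distribution shifts and a threshold tuned on one dataset may no longer separate the two classes, which is why a balanced BR/AG training set matters for generalization.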