DeepFense: A Unified, Modular, and Extensible Framework for Robust Deepfake Audio Detection
Authors: Yassine El Kheir, Arnab Das, Yixuan Xiao, Xin Wang, Feidi Kallel, Enes Erdem Erdogan, Ngoc Thang Vu, Tim Polzehl, Sebastian Moeller
Published: 2026-04-09 16:47:18+00:00
Comment: DeepFense Toolkit
AI Summary
This paper introduces DeepFense, a comprehensive, open-source PyTorch toolkit designed for robust speech deepfake detection, integrating state-of-the-art architectures, loss functions, and augmentation pipelines. Through a large-scale evaluation of over 400 models, the authors demonstrate that the choice of pre-trained front-end feature extractor significantly impacts performance, and that high-performing models often exhibit severe biases regarding audio quality, speaker gender, and language. DeepFense aims to facilitate reproducible research and address challenges in real-world deployment by providing tools for equitable training data selection and front-end fine-tuning.
Abstract
Speech deepfake detection is a well-established research field with a wide variety of models, datasets, and training strategies. However, the lack of standardized implementations and evaluation protocols limits reproducibility, benchmarking, and comparison across studies. In this work, we present DeepFense, a comprehensive, open-source PyTorch toolkit integrating the latest architectures, loss functions, and augmentation pipelines, alongside over 100 recipes. Using DeepFense, we conducted a large-scale evaluation of more than 400 models. Our findings reveal that while carefully curated training data improves cross-domain generalization, the choice of pre-trained front-end feature extractor dominates overall performance variance. Crucially, we show that high-performing models exhibit severe biases with respect to audio quality, speaker gender, and language. DeepFense is expected to facilitate real-world deployment by providing the tools needed for equitable training data selection and front-end fine-tuning.
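To illustrate the design point the abstract highlights, the sketch below shows a typical speech deepfake detector built around a swappable pre-trained front-end feature extractor and a lightweight classification back-end. This is a minimal illustration, not the DeepFense API: the choice of `facebook/wav2vec2-base` as the front-end, the mean-pooling back-end, and the single bona fide/spoof logit are all assumptions made here for clarity.

```python
# Minimal sketch (not the DeepFense API): pairing a pre-trained speech front-end
# with a small back-end head, so the front-end can be swapped independently --
# the component the paper identifies as dominating performance variance.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model  # assumed front-end; any SSL speech encoder works


class DeepfakeDetector(nn.Module):
    def __init__(self, frontend_name: str = "facebook/wav2vec2-base", hidden_dim: int = 128):
        super().__init__()
        # Pre-trained front-end feature extractor (illustrative choice).
        self.frontend = Wav2Vec2Model.from_pretrained(frontend_name)
        feat_dim = self.frontend.config.hidden_size
        # Simple back-end: mean-pool frame-level features, then an MLP head
        # producing one bona fide / spoof logit.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) at 16 kHz
        feats = self.frontend(waveform).last_hidden_state  # (batch, frames, feat_dim)
        pooled = feats.mean(dim=1)                         # (batch, feat_dim)
        return self.head(pooled)                           # (batch, 1) logits


# Usage: score two dummy 4-second clips at 16 kHz.
model = DeepfakeDetector()
scores = model(torch.randn(2, 64000))
```

Keeping the front-end behind a single constructor argument makes it easy to compare extractors, fine-tune them, or freeze them, which is the kind of experiment the paper's recipes are meant to standardize.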