DIN-CTS: Low-Complexity Depthwise-Inception Neural Network with Contrastive Training Strategy for Deepfake Speech Detection

Authors: Lam Pham, Dat Tran, Phat Lam, Florian Skopik, Alexander Schindler, Silvia Poletti, David Fischinger, Martin Boyer

Published: 2025-02-27 16:09:04+00:00

AI Summary

This paper proposes DIN-CTS, a low-complexity Depthwise-Inception Neural Network (DIN) with a Contrastive Training Strategy (CTS) for deepfake speech detection (DSD). The approach transforms audio into spectrograms, trains the DIN using a three-stage contrastive method, and detects deepfakes by comparing test-utterance embeddings to a learned Gaussian distribution of genuine speech via Mahalanobis distance. On ASVspoof 2019 LA it reaches a 4.6% EER with significantly fewer parameters (1.77 M) than traditional methods.
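The detection step can be sketched directly from this description: fit a Gaussian to the embeddings of bonafide utterances, then score a test utterance by its Mahalanobis distance from that distribution. A minimal NumPy sketch, where the `din.embed` helper and the decision threshold are hypothetical placeholders, not the paper's API:

```python
import numpy as np

def fit_bonafide_gaussian(embeddings: np.ndarray):
    """Fit a Gaussian (mean, inverse covariance) to bonafide embeddings of shape (N, D)."""
    mu = embeddings.mean(axis=0)
    # Small ridge keeps the covariance invertible when N is modest.
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(embedding: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> float:
    """Distance of one test embedding from the bonafide distribution; larger suggests fake."""
    diff = embedding - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

# Hypothetical usage (din.embed and threshold are placeholders):
# mu, cov_inv = fit_bonafide_gaussian(din.embed(bonafide_utterances))
# is_fake = mahalanobis_score(din.embed(test_utterance), mu, cov_inv) > threshold
```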

Abstract

In this paper, we propose a deep neural network approach for deepfake speech detection (DSD) based on a low-complexity Depthwise-Inception Network (DIN) trained with a contrastive training strategy (CTS). In this framework, input audio recordings are first transformed into spectrograms using a Short-Time Fourier Transform (STFT) and a Linear Filter (LF), which are then used to train the DIN. Once trained, the DIN processes bonafide utterances to extract audio embeddings, which are used to construct a Gaussian distribution representing genuine speech. Deepfake detection is then performed by computing the distance between a test utterance and this distribution to determine whether the utterance is fake or bonafide. To evaluate our proposed systems, we conducted extensive experiments on the benchmark ASVspoof 2019 LA dataset. The experimental results demonstrate the effectiveness of combining the Depthwise-Inception Network with the contrastive learning strategy in distinguishing between fake and bonafide utterances. We achieved Equal Error Rate (EER), Accuracy (Acc.), F1, and AUC scores of 4.6%, 95.4%, 97.3%, and 98.9%, respectively, using a single, low-complexity DIN with just 1.77 M parameters and 985 M FLOPS on short audio segments (4 seconds). Furthermore, our proposed system outperforms the single-system submissions in the ASVspoof 2019 LA challenge, showcasing its potential for real-time applications.
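As a concrete illustration of the front end, the sketch below computes an STFT magnitude spectrogram and projects it through a linear-frequency triangular filter bank over a fixed 4-second segment. The paper's exact STFT window, hop, and filter-bank sizes are not given in this summary, so the parameter values here (16 kHz audio, 1024-point FFT, 128 linear bins) are illustrative assumptions:

```python
import numpy as np
import librosa

def linear_filterbank(sr: int, n_fft: int, n_bins: int) -> np.ndarray:
    """Triangular filters with center frequencies spaced linearly up to Nyquist."""
    fft_freqs = np.linspace(0.0, sr / 2, 1 + n_fft // 2)
    centers = np.linspace(0.0, sr / 2, n_bins + 2)
    fb = np.zeros((n_bins, fft_freqs.size))
    for i in range(n_bins):
        left, mid, right = centers[i], centers[i + 1], centers[i + 2]
        # Rising and falling edges of the i-th triangular filter.
        fb[i] = np.clip(np.minimum((fft_freqs - left) / (mid - left),
                                   (right - fft_freqs) / (right - mid)), 0.0, None)
    return fb

def stft_lf_spectrogram(audio: np.ndarray, sr: int = 16000, n_fft: int = 1024,
                        hop_length: int = 256, n_bins: int = 128) -> np.ndarray:
    """Log-magnitude STFT projected through the linear filter bank."""
    audio = librosa.util.fix_length(audio, size=4 * sr)  # fixed 4-second input
    mag = np.abs(librosa.stft(audio, n_fft=n_fft, hop_length=hop_length))
    return np.log(linear_filterbank(sr, n_fft, n_bins) @ mag + 1e-8)
```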


Key findings
The DIN-CTS system achieved an EER of 4.6%, Accuracy of 95.4%, F1 score of 97.3%, and AUC of 98.9% on ASVspoof 2019 LA. It outperformed both a ResNet18 baseline and the single-system submissions to the ASVspoof 2019 LA challenge while maintaining low complexity (1.77 M parameters, 985 M FLOPS).
Approach
The input audio is converted into STFT-LF spectrograms, which are fed into a Depthwise-Inception Network (DIN). The DIN is trained with a three-stage Contrastive Training Strategy (CTS) that combines multiple loss functions (A-Softmax, a contrastive loss, and a bonafide-distribution variance-minimization term) with fine-tuning; plausible forms of the two distribution-shaping losses are sketched below. Deepfake detection is performed by computing the Mahalanobis distance between the embedding of a test utterance and a learned Gaussian distribution representing bonafide speech.
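How the three stages map onto the individual losses is not specified in this summary, so the PyTorch sketch below shows plausible forms of the contrastive and bonafide-variance terms only; the A-Softmax classification loss, the loss weights, and the stage schedule are omitted, and the bonafide label index is an assumption:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                     same_class: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pairwise contrastive loss: pull same-class embedding pairs together and
    push different-class pairs at least `margin` apart.
    `same_class` is a float tensor of 1s (same class) and 0s (different)."""
    dist = F.pairwise_distance(emb_a, emb_b)
    pos = same_class * dist.pow(2)
    neg = (1.0 - same_class) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

def bonafide_variance_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                           bonafide_label: int = 0) -> torch.Tensor:
    """Shrink the variance of bonafide embeddings so genuine speech forms a
    tight cluster, matching the Gaussian model used at detection time.
    Assumes each batch contains at least one bonafide sample."""
    bona = embeddings[labels == bonafide_label]
    return bona.var(dim=0, unbiased=False).mean()
```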
Datasets
ASVspoof 2019 LA
Model(s)
Depthwise-Inception Network (DIN) built from depthwise and Inception-style convolution layers. ResNet18 was used as the baseline for comparison.
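To make the "depthwise + Inception" idea concrete, here is a minimal PyTorch sketch of one block with parallel depthwise-separable branches of different kernel sizes. The branch kernel sizes and channel widths are illustrative assumptions; the paper's actual DIN topology is not reproduced in this summary:

```python
import torch
import torch.nn as nn

class DepthwiseInceptionBlock(nn.Module):
    """Inception-style block whose parallel branches use depthwise-separable
    convolutions of different kernel sizes, keeping the parameter count low."""

    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()

        def branch(kernel: int) -> nn.Sequential:
            return nn.Sequential(
                # Depthwise conv: one spatial filter per input channel.
                nn.Conv2d(in_ch, in_ch, kernel, padding=kernel // 2,
                          groups=in_ch, bias=False),
                # Pointwise (1x1) conv mixes information across channels.
                nn.Conv2d(in_ch, branch_ch, 1, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )

        self.branches = nn.ModuleList([branch(k) for k in (1, 3, 5)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the parallel branch outputs along the channel axis.
        return torch.cat([b(x) for b in self.branches], dim=1)

# Example: one block over a (batch, 1, 128, 251) STFT-LF spectrogram
# block = DepthwiseInceptionBlock(in_ch=1, branch_ch=16)  # -> (batch, 48, 128, 251)
```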
Author countries
Austria, Vietnam