Audio Deepfake Detection with Self-Supervised WavLM and Multi-Fusion Attentive Classifier

Authors: Yinlin Guo, Haofan Huang, Xi Chen, He Zhao, Yuehai Wang

Published: 2023-12-13 12:09:15+00:00

Comment: Accepted to ICASSP 2024. 5 pages, 1 figure

AI Summary

This paper introduces a novel approach for audio deepfake detection by combining a self-supervised WavLM model with a Multi-Fusion Attentive (MFA) classifier. The method leverages WavLM for extracting features highly conducive to spoofing detection and proposes the MFA classifier, based on Attentive Statistics Pooling (ASP), to capture complementary information across different time steps and layers of audio features. Experiments demonstrate that this approach achieves state-of-the-art results on the ASVspoof 2021 DF set and competitive performance on the ASVspoof 2019 and 2021 LA sets.

Abstract

With the rapid development of speech synthesis and voice conversion technologies, audio deepfakes have become a serious threat to Automatic Speaker Verification (ASV) systems. Numerous countermeasures have been proposed to detect this type of attack. In this paper, we report our efforts to combine the self-supervised WavLM model with a Multi-Fusion Attentive classifier for audio deepfake detection. Our method exploits the WavLM model for the first time to extract features that are more conducive to spoofing detection. We then propose a novel Multi-Fusion Attentive (MFA) classifier based on the Attentive Statistics Pooling (ASP) layer. The MFA captures complementary information in the audio features at both the time and layer levels. Experiments demonstrate that our method achieves state-of-the-art results on the ASVspoof 2021 DF set and competitive results on the ASVspoof 2019 and 2021 LA sets.


Key findings
The proposed method achieved state-of-the-art performance on the ASVspoof 2021 DF evaluation set, reporting the lowest EER. It also demonstrated competitive results on the ASVspoof 2019 LA and ASVspoof 2021 LA evaluation sets. Ablation studies confirmed that WavLM features are more effective for deepfake detection than Wav2vec2 features, and that the ASP-based MFA classifier significantly improves performance by effectively leveraging multi-layer and time-level feature information.
Approach
The proposed method uses a self-supervised WavLM model as a front-end feature extractor to obtain multi-layer feature embeddings from raw audio waveforms. These embeddings are then processed by a novel Multi-Fusion Attentive (MFA) classifier, which is built upon Attentive Statistics Pooling (ASP) layers. The MFA classifier aggregates information from different WavLM layers and time steps to capture complementary features, facilitating the extraction of highly discriminative features for deepfake detection.
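The core building block named above, Attentive Statistics Pooling (ASP), converts variable-length frame-level features into a fixed utterance-level vector by computing an attention-weighted mean and standard deviation over time. The sketch below is a minimal NumPy illustration of the generic ASP operation (Okabe et al., 2018), not the authors' exact MFA implementation; the function name, shapes, and parameters are hypothetical choices for the example.

```python
import numpy as np

def attentive_stats_pooling(h, W, b, v, eps=1e-8):
    """Generic Attentive Statistics Pooling (illustrative sketch).

    h: (T, D) frame-level features; W: (A, D), b: (A,), v: (A,)
    are attention parameters. Returns a (2*D,) utterance-level
    vector: attention-weighted mean concatenated with weighted std.
    """
    # One scalar attention score per frame, then softmax over time
    e = np.tanh(h @ W.T + b) @ v             # (T,)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                     # attention weights, sum to 1
    mu = alpha @ h                           # weighted mean, (D,)
    var = alpha @ (h ** 2) - mu ** 2         # weighted variance, (D,)
    sigma = np.sqrt(np.clip(var, eps, None)) # weighted std, (D,)
    return np.concatenate([mu, sigma])       # (2*D,)

# Toy usage: 50 frames of 8-dim features, 4 attention units
rng = np.random.default_rng(0)
h = rng.normal(size=(50, 8))
W, b, v = rng.normal(size=(4, 8)), np.zeros(4), rng.normal(size=4)
pooled = attentive_stats_pooling(h, W, b, v)
print(pooled.shape)  # (16,)
```

Concatenating the std alongside the mean is what distinguishes ASP from plain attentive pooling: the second-order statistics preserve temporal variability cues that are useful for spoofing detection.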
Datasets
ASVspoof 2019 LA (train and dev sets), ASVspoof 2021 LA (evaluation set), ASVspoof 2021 DF (evaluation set)
Model(s)
WavLM (WavLM Base, WavLM Large), Multi-Fusion Attentive (MFA) classifier, Attentive Statistics Pooling (ASP) layer
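Since the approach aggregates embeddings from multiple WavLM layers, a common and simple way to do such layer-level fusion is a softmax-weighted sum of hidden states with learnable scalar weights. The MFA classifier's actual fusion is attentive and may differ; the following NumPy sketch only illustrates the generic weighted-sum idea, with hypothetical shapes (12 layers standing in for WavLM Base's transformer layers).

```python
import numpy as np

def weighted_layer_fusion(layer_feats, w):
    """Fuse hidden states from several transformer layers with
    softmax-normalized scalar weights (illustrative sketch).

    layer_feats: (L, T, D) stacked per-layer frame features.
    w: (L,) unnormalized layer weights (learnable in practice).
    Returns (T, D) fused frame-level features.
    """
    a = np.exp(w - w.max())
    a /= a.sum()                                  # softmax over layers
    # Contract the layer axis: sum_l a[l] * layer_feats[l]
    return np.tensordot(a, layer_feats, axes=1)  # (T, D)

feats = np.random.default_rng(1).normal(size=(12, 50, 8))
w = np.zeros(12)                 # uniform initialization
fused = weighted_layer_fusion(feats, w)
print(fused.shape)  # (50, 8)
```

With uniform weights this reduces to a plain average over layers; training the weights lets the model emphasize whichever layers carry the most spoofing-relevant information, which matches the ablation finding that multi-layer information helps.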
Author countries
China