Multi-level SSL Feature Gating for Audio Deepfake Detection

Authors: Hoan My Tran, Damien Lolive, Aghilas Sini, Arnaud Delhay, Pierre-François Marteau, David Guennec

Published: 2025-09-03 15:37:52+00:00

AI Summary

This paper proposes a novel audio deepfake detection approach using a multi-level self-supervised learning (SSL) feature gating mechanism. It combines a gating mechanism to extract relevant features from the XLS-R model with a MultiConv classifier to capture speech artifacts and Centered Kernel Alignment (CKA) to enhance feature diversity, achieving state-of-the-art performance and robust generalization to out-of-domain datasets.

Abstract

Recent advancements in generative AI, particularly in speech synthesis, have enabled the generation of highly natural-sounding synthetic speech that closely mimics human voices. While these innovations hold promise for applications like assistive technologies, they also pose significant risks, including misuse for fraudulent activities, identity theft, and security threats. Current research on spoofing detection countermeasures remains limited in its generalization to unseen deepfake attacks and languages. To address this, we propose a gating mechanism that extracts relevant features from the XLS-R speech foundation model, used as a front-end feature extractor. As the downstream back-end classifier, we employ Multi-kernel gated Convolution (MultiConv) to capture both local and global speech artifacts. Additionally, we introduce Centered Kernel Alignment (CKA) as a similarity metric to enforce diversity in learned features across different MultiConv layers. By integrating CKA with our gating mechanism, we hypothesize that each component helps improve the learning of distinct synthetic speech patterns. Experimental results demonstrate that our approach achieves state-of-the-art performance on in-domain benchmarks while generalizing robustly to out-of-domain datasets, including multilingual speech samples. This underscores its potential as a versatile solution for detecting evolving speech deepfake threats.


Key findings
The proposed method achieves state-of-the-art performance on in-domain benchmarks (19LA and 21DF) and shows strong generalization to diverse out-of-domain datasets, including multilingual speech. The use of CKA significantly improves robustness and cross-domain generalization. The model struggles with certain neural autoregressive vocoders and some attacks in the 21LA dataset.
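The CKA metric credited above with improving robustness can be illustrated with its linear form. This is a minimal NumPy sketch, not the paper's implementation (the authors may use a kernel or minibatch variant, and here CKA is only computed, not wired into a training loss):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n_samples, d1), Y: (n_samples, d2). Returns a scalar in [0, 1];
    1 means the two representations match up to rotation/scaling, so a
    diversity loss would *minimize* CKA between layers.
    """
    # Center each feature dimension over the sample axis.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 16))
Y = rng.standard_normal((256, 16))
print(round(linear_cka(X, X), 4))  # identical features -> 1.0
print(linear_cka(X, Y))            # unrelated features -> near 0
```

Identical inputs score exactly 1, while independent random features score close to 0, which is the property exploited when penalizing high CKA between MultiConv layers.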
Approach
The approach uses the XLS-R model for feature extraction, employing a SwiGLU gating mechanism to aggregate hidden features. A MultiConv classifier with multiple convolutional kernels and a gating mechanism is used as the back-end, and CKA is integrated as a loss function to promote feature diversity across MultiConv layers.
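One plausible reading of the SwiGLU-based aggregation of XLS-R hidden states can be sketched as follows. The projection matrices `W_gate`/`W_value` and the softmax layer-mixing weights `alpha` are assumptions for illustration; the paper's actual gating architecture may differ:

```python
import numpy as np

def swish(x):
    """Swish / SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swiglu_aggregate(hidden_states, W_gate, W_value, alpha):
    """Aggregate SSL hidden states with a SwiGLU-style gate.

    hidden_states: (L, T, D) stack of L transformer-layer outputs.
    W_gate, W_value: (D, D) projections (hypothetical shapes).
    alpha: (L,) learnable layer-mixing logits.
    Returns a (T, D) gated, layer-weighted representation.
    """
    # SwiGLU(x) = Swish(x W_gate) * (x W_value), applied per layer.
    gated = swish(hidden_states @ W_gate) * (hidden_states @ W_value)
    # Softmax over layers, then a weighted sum across the L axis.
    w = np.exp(alpha - alpha.max())
    w = w / w.sum()
    return np.tensordot(w, gated, axes=1)

L, T, D = 4, 10, 8
rng = np.random.default_rng(1)
h = rng.standard_normal((L, T, D))
out = swiglu_aggregate(h, rng.standard_normal((D, D)),
                       rng.standard_normal((D, D)), np.zeros(L))
print(out.shape)  # (10, 8)
```

The gate lets the model suppress layer features irrelevant to spoofing artifacts before the weighted layer sum, which is the intuition behind using gating rather than a plain learned average of hidden states.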
Datasets
ASVspoof 2019 Logical Access (19LA) for training, and ASVspoof 2021 Logical Access (21LA), ASVspoof 2021 DeepFake (21DF), Fake or Real (FoR), In-The-Wild (ITW), Diffusion and Flow-matching-based Audio Deepfake Dataset (DFADD), LibriSeVoc, DEepfake CROss-lingual (DECRO) English (D-EN) and Chinese (D-CH), Multi-Language Audio Anti-Spoof (MLAAD) (including English, French, German, Spanish, Italian, Polish, Russian, and Ukrainian), Audio Deepfake Detection 2023 (ADD23) Track 1.2 Round 1 and Round 2, and HABLA.
Model(s)
XLS-R (front-end feature extractor), MultiConv (back-end classifier), MLP (for final classification)
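The MultiConv back-end's core idea, parallel convolution branches with different kernel sizes fused through a gate, can be sketched as below. The branch fusion (sigmoid self-gating and mean over branches) is an assumption for illustration; the paper's MultiConv block details may differ:

```python
import numpy as np

def conv1d_same(x, kernel):
    """Depthwise 1-D convolution along time with 'same' padding.

    x: (T, D) feature sequence; kernel: (k, D), one column per channel,
    with odd k so the padded output stays aligned with the input.
    """
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([(xp[t:t + k] * kernel).sum(axis=0)
                     for t in range(x.shape[0])])

def multiconv_gated(x, kernels):
    """Multi-kernel gated convolution sketch.

    Small kernels target local artifacts, large ones more global cues;
    a sigmoid gate weights each branch before fusing (hypothetical fusion).
    """
    branches = [conv1d_same(x, k) for k in kernels]  # one branch per size
    stacked = np.stack(branches)                     # (n_kernels, T, D)
    gate = 1.0 / (1.0 + np.exp(-stacked))            # sigmoid self-gate
    return (gate * stacked).mean(axis=0)             # fuse to (T, D)

rng = np.random.default_rng(2)
x = rng.standard_normal((20, 6))
kernels = [rng.standard_normal((k, 6)) * 0.1 for k in (3, 5, 7)]
y = multiconv_gated(x, kernels)
print(y.shape)  # (20, 6)
```

The fused (T, D) output would then feed the MLP head for the final bonafide/spoof decision.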
Author countries
France