WST-X Series: Wavelet Scattering Transform for Interpretable Speech Deepfake Detection

Authors: Xi Xuan, Davide Carbone, Ruchi Pandey, Wenxin Zhang, Tomi H. Kinnunen

Published: 2026-02-03 01:39:28+00:00

Comment: Submitted to IEEE Signal Processing Letters

AI Summary

This paper introduces the WST-X series, a novel family of feature extractors for interpretable speech deepfake detection that combines the wavelet scattering transform (WST) with self-supervised learning (SSL) features. By investigating 1D and 2D WSTs, the approach aims to capture both acoustic details and higher-order structural anomalies. Experimental results on the Deepfake-Eval-2024 dataset demonstrate that WST-X significantly outperforms existing front-ends, emphasizing the importance of specific WST parameters for detecting subtle artifacts.

Abstract

Front-end design for speech deepfake detectors falls primarily into two categories. Hand-crafted filterbank features are transparent but limited in capturing high-level semantic details, often leaving a performance gap relative to self-supervised learning (SSL) features. SSL features, in turn, lack interpretability and may overlook fine-grained spectral anomalies. We propose the WST-X series, a novel family of feature extractors that combines the best of both worlds via the wavelet scattering transform (WST), which integrates wavelets with nonlinearities analogous to deep convolutional networks. We investigate 1D and 2D WSTs to extract acoustic details and higher-order structural anomalies, respectively. Experimental results on the recent and challenging Deepfake-Eval-2024 dataset indicate that WST-X outperforms existing front-ends by a wide margin. Our analysis reveals that a small averaging scale ($J$), combined with high frequency and directional resolutions ($Q, L$), is critical for capturing subtle artifacts. This underscores the value of translation-invariant and deformation-stable features for robust and interpretable speech deepfake detection.


Key findings
- The WST-X series, particularly WST-X1, outperformed traditional DSP features (Mel, Linear, and constant-Q filterbanks) and standalone PT-XLSR on speech deepfake detection, with WST-X1 reducing minDCF by 15.89% relative to PT-XLSR.
- A small averaging scale (J=2), high frequency (Q=10) and directional (L=10) resolutions, and a second scattering order (M=2) were identified as the critical WST parameters for capturing subtle spectro-temporal deepfake artifacts.
- WST features provide more fine-grained and interpretable insight into synthesis artifacts than conventional features.
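To make the role of these parameters concrete, the sketch below is a minimal, self-contained toy 1D scattering transform in numpy. It is not the paper's implementation (a proper front-end would use a Morlet filterbank, e.g. via the Kymatio library); the Gabor filters and bandwidths here are illustrative assumptions. It shows how J sets the averaging/subsampling scale, Q the number of wavelets per octave, and M the scattering order.

```python
import numpy as np

def gabor_filter(n, xi, sigma):
    # Frequency-domain Gaussian centered at normalized frequency xi.
    freqs = np.fft.fftfreq(n)
    return np.exp(-((freqs - xi) ** 2) / (2 * sigma ** 2))

def scattering1d(x, J=2, Q=10, M=2):
    """Toy 1D wavelet scattering (illustrative, not the paper's front-end).
    J: averaging scale (output subsampled by 2**J)
    Q: wavelets per octave; M: maximum scattering order."""
    n = len(x)
    # Low-pass phi: Gaussian around 0 with bandwidth shrinking as 2**-J.
    phi = gabor_filter(n, 0.0, 0.5 / (2 ** J * np.pi))
    # Band-pass bank psi: log-spaced centers, Q per octave (assumed design).
    xis = [0.4 * 2 ** (-j / Q) for j in range(J * Q)]
    psis = [gabor_filter(n, xi, xi / Q) for xi in xis]

    def lowpass(u):  # average with phi, then subsample by 2**J
        return np.real(np.fft.ifft(np.fft.fft(u) * phi))[:: 2 ** J]

    coeffs = [lowpass(x)]                        # order 0: x * phi
    U1 = [np.abs(np.fft.ifft(np.fft.fft(x) * p)) for p in psis]
    if M >= 1:
        coeffs += [lowpass(u) for u in U1]       # order 1: |x * psi1| * phi
    if M >= 2:                                   # order 2: ||x*psi1| * psi2| * phi
        for i, u in enumerate(U1):
            for p in psis[i + 1:]:               # only lower-frequency psi2 paths
                coeffs.append(lowpass(np.abs(np.fft.ifft(np.fft.fft(u) * p))))
    return np.stack(coeffs)                      # (n_paths, n // 2**J)
```

A small J keeps fine temporal detail (less averaging), while larger Q and a second order (M=2) add paths that resolve the subtle spectro-temporal modulations the paper finds discriminative.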
Approach
The WST-X series integrates the Wavelet Scattering Transform (WST) with Prompt-Tuned XLSR (PT-XLSR) features in two architectural designs: WST-X1 (parallel integration of 1D WST and PT-XLSR) and WST-X2 (cascaded integration where 2D WST processes PT-XLSR latent feature maps). These extracted features are then fed into a Mamba-based classifier to distinguish between real and deepfake speech.
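The two integration designs can be sketched schematically. Everything below is an illustrative assumption, not the paper's code: the shapes and names are hypothetical, and a simple 4x4 average-pooling stands in for the actual 2D WST applied to the SSL latent map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two feature streams (shapes are assumptions):
# PT-XLSR frames (T, D_ssl) and 1D WST coefficients aligned to the same T frames.
T, D_ssl, D_wst = 50, 1024, 37
ssl_feats = rng.standard_normal((T, D_ssl))
wst1d_feats = rng.standard_normal((T, D_wst))

# WST-X1 (parallel): concatenate the two streams frame by frame, then pass
# the fused sequence to the downstream (Mamba-based) classifier.
fused_parallel = np.concatenate([ssl_feats, wst1d_feats], axis=-1)
assert fused_parallel.shape == (T, D_ssl + D_wst)

# WST-X2 (cascaded): treat the SSL latent map (T, D_ssl) as a 2D array and
# scatter over it. Here a placeholder 2**J x 2**J block average stands in for
# the real 2D WST (both axes subsampled by 2**J).
J = 2
crop = ssl_feats[: T - T % 2 ** J, :]                  # make T divisible by 2**J
cascaded = crop.reshape(-1, 2 ** J, D_ssl // 2 ** J, 2 ** J).mean(axis=(1, 3))
```

The parallel path preserves both views intact for the classifier, while the cascaded path looks for higher-order structure inside the SSL representation itself, matching the 1D-vs-2D WST roles described above.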
Datasets
Deepfake-Eval-2024 (DE2024)
Model(s)
XLSR-300M (Prompt-Tuned XLSR) for feature extraction, Mamba-based classifier
Author countries
Finland, France, China, Canada