WST-X Series: Wavelet Scattering Transform for Interpretable Speech Deepfake Detection
Authors: Xi Xuan, Davide Carbone, Ruchi Pandey, Wenxin Zhang, Tomi H. Kinnunen
Published: 2026-02-03 01:39:28+00:00
Comment: Submitted to IEEE Signal Processing Letters
AI Summary
This paper introduces the WST-X series, a novel family of feature extractors for interpretable speech deepfake detection that bridges hand-crafted filterbank features and self-supervised learning (SSL) features via the wavelet scattering transform (WST). By investigating 1D and 2D WSTs, the approach captures both acoustic details and higher-order structural anomalies. Experimental results on the Deepfake-Eval-2024 dataset demonstrate that WST-X significantly outperforms existing front-ends, underscoring the importance of specific WST parameters for detecting subtle artifacts.
Abstract
Front-end design for speech deepfake detectors falls primarily into two categories. Hand-crafted filterbank features are transparent but limited in capturing high-level semantic details, often resulting in performance gaps compared to self-supervised (SSL) features. SSL features, in turn, lack interpretability and may overlook fine-grained spectral anomalies. We propose the WST-X series, a novel family of feature extractors that combines the best of both worlds via the wavelet scattering transform (WST), which integrates wavelets with nonlinearities in a manner analogous to deep convolutional networks. We investigate 1D and 2D WSTs to extract acoustic details and higher-order structural anomalies, respectively. Experimental results on the recent and challenging Deepfake-Eval-2024 dataset indicate that WST-X outperforms existing front-ends by a wide margin. Our analysis reveals that a small averaging scale ($J$), combined with high frequency and directional resolutions ($Q$, $L$), is critical for capturing subtle artifacts. This underscores the value of translation-invariant and deformation-stable features for robust and interpretable speech deepfake detection.
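The sketch below illustrates how 1D and 2D wavelet scattering features of the kind described in the abstract could be extracted in practice. It is not the authors' implementation: the paper does not name a toolkit, so this assumes the Kymatio library with its PyTorch backend, and the parameter values ($J$, $Q$, $L$), signal lengths, and spectrogram size are illustrative placeholders rather than the paper's settings.

```python
# Minimal sketch only: assumes the Kymatio library (https://www.kymat.io) and
# PyTorch; parameter values are placeholders, not the paper's configuration.
import torch
from kymatio.torch import Scattering1D, Scattering2D

# --- 1D WST over the raw waveform: low-order acoustic detail ---
T = 16000                      # 1 s of audio at 16 kHz (assumed)
J1, Q1 = 6, 16                 # small averaging scale J, high frequency resolution Q
scat1d = Scattering1D(J=J1, shape=T, Q=Q1)

waveform = torch.randn(1, T)   # stand-in for a real utterance
feat_1d = scat1d(waveform)     # (batch, n_coeffs, time) scattering coefficients
print(feat_1d.shape)

# --- 2D WST over a time-frequency representation: higher-order structure ---
M, N = 80, 200                 # (mel bins, frames), assumed spectrogram size
J2, L2 = 2, 8                  # small J, L directional wavelet orientations
scat2d = Scattering2D(J=J2, shape=(M, N), L=L2)

spectrogram = torch.randn(1, M, N)  # stand-in for a real log-mel spectrogram
feat_2d = scat2d(spectrogram)       # (batch, n_coeffs, M/2**J2, N/2**J2)
print(feat_2d.shape)
```

In this reading, a small $J$ keeps the averaging window short so fine temporal detail survives, while larger $Q$ and $L$ give finer frequency and orientation sampling, consistent with the abstract's finding; the resulting coefficients would then feed whatever downstream detector is used.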