Multi-Task Transformer for Explainable Speech Deepfake Detection via Formant Modeling
Authors: Viola Negroni, Luca Cuccovillo, Paolo Bestagini, Patrick Aichroth, Stefano Tubaro
Published: 2026-01-21 10:34:12+00:00
Comment: Accepted @ IEEE ICASSP 2026
AI Summary
This paper introduces SFATNet-4, a lightweight multi-task transformer for explainable speech deepfake detection. The model simultaneously predicts formant trajectories and voicing patterns while classifying speech as real or fake, providing insights into whether its decisions rely more on voiced or unvoiced regions. It improves upon its predecessor by requiring fewer parameters, training faster, and offering better interpretability without sacrificing prediction performance.
Abstract
In this work, we introduce a multi-task transformer for speech deepfake detection, capable of predicting formant trajectories and voicing patterns over time, ultimately classifying speech as real or fake, and highlighting whether its decisions rely more on voiced or unvoiced regions. Building on a prior speaker-formant transformer architecture, we streamline the model with an improved input segmentation strategy, redesign the decoding process, and integrate built-in explainability. Compared to the baseline, our model requires fewer parameters, trains faster, and provides better interpretability, without sacrificing prediction performance.
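The multi-task design described above — a shared encoder feeding per-frame formant and voicing outputs plus an utterance-level real/fake decision — can be illustrated with a minimal NumPy sketch. This is not the paper's architecture: the dimensions, the number of formants, the single-layer "encoder", and the mean-pooling classifier head are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): T frames, D input features,
# H shared hidden units, and 4 tracked formants.
T, D, H = 100, 80, 32
N_FORMANTS = 4

# Stand-in "shared encoder": a single per-frame projection with a nonlinearity.
# The actual model uses a transformer; this placeholder only shows the data flow.
W_enc = rng.standard_normal((D, H)) * 0.1

def encode(x):
    """Map (T, D) frame features to (T, H) shared representations."""
    return np.tanh(x @ W_enc)

# Three task heads sharing the encoder output:
W_formant = rng.standard_normal((H, N_FORMANTS)) * 0.1  # formant regression per frame
W_voicing = rng.standard_normal((H, 1)) * 0.1           # voiced/unvoiced logit per frame
W_cls = rng.standard_normal((H, 1)) * 0.1               # utterance-level real/fake logit

def forward(x):
    h = encode(x)
    formants = h @ W_formant                              # (T, 4) trajectories over time
    voicing = 1.0 / (1.0 + np.exp(-(h @ W_voicing)))      # (T, 1) voicing probabilities
    pooled = h.mean(axis=0)                               # temporal average pooling
    p_fake = 1.0 / (1.0 + np.exp(-(pooled @ W_cls)))      # scalar fake probability
    return formants, voicing, float(p_fake)

x = rng.standard_normal((T, D))
formants, voicing, p_fake = forward(x)
print(formants.shape, voicing.shape, p_fake)
```

Because the per-frame voicing output is produced alongside the classification decision, one can correlate the classifier's frame-level evidence with the predicted voiced/unvoiced mask, which is the kind of built-in explainability the abstract refers to.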