Fine-Grained Frame Modeling in Multi-head Self-Attention for Speech Deepfake Detection

Authors: Tuan Dat Phuong, Duc-Tuan Truong, Long-Vu Hoang, Trang Nguyen Thi Thu

Published: 2026-02-04 16:12:51+00:00

Comment: Accepted by ICASSP 2026

AI Summary

This paper proposes Fine-Grained Frame Modeling (FGFM) for Transformer-based speech deepfake detection, focusing on capturing subtle, localized artifacts. FGFM introduces a multi-head voting (MHV) module to select the most informative frames and a cross-layer refinement (CLR) module to enhance their discriminative power. The method consistently outperforms baseline models and achieves strong performance across various benchmarks.

Abstract

Transformer-based models have shown strong performance in speech deepfake detection, largely due to the effectiveness of the multi-head self-attention (MHSA) mechanism. MHSA provides frame-level attention scores, which are particularly valuable because deepfake artifacts often occur in small, localized regions along the temporal dimension of speech. This makes fine-grained frame modeling essential for accurately detecting subtle spoofing cues. In this work, we propose fine-grained frame modeling (FGFM) for MHSA-based speech deepfake detection, where the most informative frames are first selected through a multi-head voting (MHV) module. These selected frames are then refined via a cross-layer refinement (CLR) module to enhance the model's ability to learn subtle spoofing cues. Experimental results demonstrate that our method outperforms the baseline model and achieves Equal Error Rates (EERs) of 0.90%, 1.88%, and 6.64% on the LA21, DF21, and ITW datasets, respectively. These consistent improvements across multiple benchmarks highlight the effectiveness of our fine-grained modeling for robust speech deepfake detection.


Key findings
The proposed FGFM method outperforms the baseline XLSR-Conformer model, achieving EERs of 0.90%, 1.88%, and 6.64% on the LA21, DF21, and ITW datasets, respectively. It demonstrates relative EER reductions of 7.2%, 27.1%, and 21.1% compared to the baseline, showcasing strong generalization and robustness to unseen spoofing conditions, especially on the out-of-domain ITW dataset. The approach is also shown to be beneficial and generalizable to both Conformer-based and Transformer-based architectures.
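The absolute EERs and relative reductions above are mutually consistent: with a relative reduction r = (baseline − new) / baseline, the implied baseline EERs can be back-computed. The short check below verifies this arithmetic (the back-computed baseline values are derived here, not quoted from the paper):

```python
# Sanity-check the reported EERs against the reported relative reductions.
# implied_baseline = new_eer / (1 - r); values are derived, not from the paper.
reported = [
    ("LA21", 0.90, 0.072),
    ("DF21", 1.88, 0.271),
    ("ITW",  6.64, 0.211),
]

for name, new_eer, reduction in reported:
    implied_baseline = new_eer / (1.0 - reduction)
    print(f"{name}: implied baseline EER ≈ {implied_baseline:.2f}%")
```

For example, a 27.1% relative reduction down to 1.88% on DF21 implies a baseline EER of roughly 2.58%.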
Approach
The authors propose Fine-Grained Frame Modeling (FGFM), which incorporates a Multi-Head Voting (MHV) module to select the most informative frames from each attention head of a Multi-Head Self-Attention (MHSA) mechanism. These selected frames are then processed and refined by a Cross-Layer Refinement (CLR) module, which includes additional Conformer blocks and a Dynamic Aggregation Feed-Forward (DAFF) block, to enhance the classification token's ability to capture subtle spoofing cues.
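The summary does not give the exact voting rule, but the MHV idea can be illustrated with a minimal sketch: each attention head nominates its top-k most-attended frames, and the frames with the most votes across heads are kept. The function name, `top_k`/`num_select` parameters, and tie-breaking rule below are all assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def multi_head_voting(attn_scores, top_k, num_select):
    """Hypothetical sketch of multi-head voting (MHV) frame selection.

    attn_scores: (num_heads, num_frames) attention mass each head assigns per frame
    top_k:       frames each head nominates
    num_select:  frames kept after aggregating votes across heads
    """
    num_heads, num_frames = attn_scores.shape
    votes = np.zeros(num_frames, dtype=int)
    for head in range(num_heads):
        # each head votes for its top_k most-attended frames
        nominated = np.argsort(attn_scores[head])[-top_k:]
        votes[nominated] += 1
    # keep the most-voted frames (stable sort breaks ties by frame index)
    selected = np.argsort(-votes, kind="stable")[:num_select]
    return np.sort(selected)

# Toy example: all three heads attend most to frames 2 and 5
attn = np.array([
    [0.1, 0.1, 0.9, 0.1, 0.1, 0.8],
    [0.1, 0.2, 0.7, 0.1, 0.1, 0.9],
    [0.1, 0.1, 0.8, 0.2, 0.1, 0.7],
])
print(multi_head_voting(attn, top_k=2, num_select=2))  # → [2 5]
```

In the paper, the selected frames would then be passed to the CLR module for refinement rather than used directly.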
Datasets
ASVspoof 2019 LA training set, ASVspoof 2021 LA (LA21), ASVspoof 2021 DF (DF21), In-the-Wild (ITW) dataset
Model(s)
XLSR-Conformer (baseline), XLSR-Transformer (baseline), Multi-Head Self-Attention (MHSA), Multi-Head Voting (MHV) module, Cross-Layer Refinement (CLR) module, Dynamic Aggregation Feed-Forward (DAFF) block
Author countries
Vietnam, Singapore