SNAP: Speaker Nulling for Artifact Projection in Speech Deepfake Detection

Authors: Kyudan Jung, Jihwan Kim, Minwoo Lee, Soyoon Kim, Jeonghoon Kim, Jaegul Choo, Cheonbok Park

Published: 2026-03-21 07:05:30+00:00

Comment: 9 pages, 3 figures, 2 tables

AI Summary

The paper introduces SNAP, a speaker-nulling framework for speech deepfake detection, addressing the issue of 'speaker entanglement' where self-supervised learning-based encoders over-rely on speaker information. SNAP estimates a speaker subspace and applies orthogonal projection to suppress speaker-dependent components, isolating synthesis artifacts. This approach improves generalization across unseen speakers and achieves state-of-the-art performance with a simple classifier.

Abstract

Recent advancements in text-to-speech technologies enable generating high-fidelity synthetic speech nearly indistinguishable from real human voices. While recent studies show the efficacy of self-supervised learning-based speech encoders for deepfake detection, these models struggle to generalize across unseen speakers. Our quantitative analysis suggests these encoder representations are substantially influenced by speaker information, causing detectors to exploit speaker-specific correlations rather than artifact-related cues. We call this phenomenon speaker entanglement. To mitigate this reliance, we introduce SNAP, a speaker-nulling framework. We estimate a speaker subspace and apply orthogonal projection to suppress speaker-dependent components, isolating synthesis artifacts within the residual features. By reducing speaker entanglement, SNAP encourages detectors to focus on artifact-related patterns, leading to state-of-the-art performance.


Key findings
SNAP achieves state-of-the-art performance, with an EER of 0.35% on ASVspoof 2019 LA and 5.42% on ASVspoof 2021 DF, significantly outperforming baselines. It generalizes robustly to unseen speakers and to novel TTS models (CosyVoice2, F5-TTS) using only a simple logistic regression classifier (2,049 parameters). The method mitigates speaker entanglement, as evidenced by reduced speaker clustering and improved discriminability between real and synthetic speech, with EER decreasing stably as the number of training speakers grows.
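The results above are reported as equal error rate (EER), the operating point where the false-acceptance and false-rejection rates coincide. A minimal sketch of computing EER from detection scores (the `eer` helper and the toy scores are illustrative, not from the paper):

```python
import numpy as np

def eer(scores, labels):
    """Equal error rate: sweep thresholds, return the point where
    false-acceptance rate (FAR) and false-rejection rate (FRR) meet.
    Convention: higher score = more likely bonafide (label 1)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    best_gap, best_eer = np.inf, 1.0
    for t in np.unique(scores):
        far = np.mean(scores[labels == 0] >= t)  # spoof accepted as bonafide
        frr = np.mean(scores[labels == 1] < t)   # bonafide rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Perfectly separated scores give EER = 0; fully inverted scores give EER = 1.
print(eer([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # → 0.0
```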
Approach
SNAP addresses speaker entanglement by using a pre-trained WavLM-Large encoder to extract speech features, estimating a speaker subspace via PCA on speaker centroids, and applying an orthogonal projection to null speaker-dependent components, leaving residual features that emphasize synthesis artifacts. A simple logistic regression classifier is then trained on these speaker-agnostic residuals for deepfake detection.
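The subspace-estimation and projection steps can be sketched in a few lines of numpy. This is a toy illustration only: the feature dimension, number of nulled directions, and the synthetic "encoder" features below are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_speakers, per_spk = 32, 10, 20  # toy sizes (assumed, not from the paper)

# Fake encoder features: each speaker's utterances cluster around an offset.
speaker_means = rng.normal(size=(n_speakers, d)) * 3.0
feats, spk_ids = [], []
for s in range(n_speakers):
    feats.append(speaker_means[s] + rng.normal(size=(per_spk, d)))
    spk_ids += [s] * per_spk
X = np.vstack(feats)                 # (200, 32) utterance-level features
spk_ids = np.array(spk_ids)

# 1) Per-speaker centroids of the encoder features.
C = np.stack([X[spk_ids == s].mean(axis=0) for s in range(n_speakers)])

# 2) Speaker subspace: PCA (via SVD) on the centered centroids.
Cc = C - C.mean(axis=0)
_, _, Vt = np.linalg.svd(Cc, full_matrices=False)
k = 5                                # number of speaker directions to null (assumed)
U = Vt[:k].T                         # (d, k) orthonormal basis of speaker subspace

# 3) Orthogonal projection onto the complement: x_res = (I - U U^T) x.
P = np.eye(d) - U @ U.T
X_res = X @ P.T                      # residual features for the classifier

# Residuals carry (numerically) zero energy along the nulled directions.
print(np.abs(X_res @ U).max() < 1e-9)  # → True
```

A logistic regression classifier would then be fit on `X_res` with bonafide/spoof labels; because `P` is idempotent and symmetric, applying it at test time removes the same speaker directions from unseen speakers' features.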
Datasets
ASVspoof 2019 LA, ASVspoof 2021 LA/DF, In-the-Wild (Müller et al., 2022), LibriSpeech, CosyVoice2, F5-TTS
Model(s)
WavLM-Large (feature extractor), PCA (for speaker subspace estimation), Logistic Regression (classifier)
Author countries
South Korea