Toward Noise-Aware Audio Deepfake Detection: Survey, SNR-Benchmarks, and Practical Recipes
Authors: Udayon Sen, Alka Luqman, Anupam Chattopadhyay
Published: 2025-12-15 02:22:37+00:00
AI Summary
This paper surveys and evaluates the robustness of state-of-the-art audio deepfake detection models against background noise, introducing a reproducible benchmark framework across controlled signal-to-noise ratios (SNRs). It mixes ASVspoof 2021 DF utterances with MS-SNSD noises to quantify performance degradation from near-clean (35 dB) to very noisy (-5 dB) conditions. Fine-tuning the encoders yields substantial improvements in noise robustness over frozen baselines, particularly at low SNRs.
Abstract
Deepfake audio detection has progressed rapidly with strong pre-trained encoders (e.g., WavLM, Wav2Vec2, MMS). However, performance under realistic capture conditions (background noise in domestic, office, and transport settings; room reverberation; consumer channels) often lags results on clean lab data. We survey and evaluate the robustness of state-of-the-art audio deepfake detection models and present a reproducible framework that mixes MS-SNSD noises with ASVspoof 2021 DF utterances for evaluation under controlled signal-to-noise ratios (SNRs). SNR is a measurable proxy for noise severity widely used in speech processing; it lets us sweep from near-clean (35 dB) to very noisy (-5 dB) and quantify how gracefully models degrade. We study multi-condition training and fixed-SNR testing for pre-trained encoders (WavLM, Wav2Vec2, MMS), reporting accuracy, ROC-AUC, and EER on binary and four-class (authenticity × corruption) tasks. In our experiments, fine-tuning reduces EER by 10-15 percentage points at SNRs from 10 dB down to 0 dB across backbones.
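The core of the benchmark described above is additive mixing of a noise clip into each utterance at a target SNR. Below is a minimal NumPy sketch of that step; the function name mix_at_snr, the power-matching details, and the intermediate SNR grid are our assumptions, since the abstract specifies only the 35 dB and -5 dB endpoints.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise clip into a speech utterance at a target SNR (dB).

    The noise is looped or truncated to the utterance length, then scaled so
    that 10 * log10(P_speech / P_noise) equals snr_db.
    """
    # Tile/trim the noise to match the utterance length.
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(speech)]

    # Average power of each signal (epsilon avoids division by zero).
    eps = 1e-12
    p_speech = np.mean(speech ** 2) + eps
    p_noise = np.mean(noise ** 2) + eps

    # Scale the noise so the mixture hits the requested SNR.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = rng.standard_normal(16000)  # stand-in for a 1 s utterance at 16 kHz
    noise = rng.standard_normal(8000)    # stand-in for a shorter noise clip
    # Illustrative SNR grid; only the 35 dB and -5 dB endpoints are from the paper.
    for snr in (35, 20, 10, 5, 0, -5):
        noisy = mix_at_snr(speech, noise, snr)
        achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((noisy - speech) ** 2))
        print(f"target {snr:>3} dB -> achieved {achieved:.1f} dB")
```

In a real pipeline the stand-in arrays would be replaced by ASVspoof 2021 DF utterances and MS-SNSD noise files loaded at a common sample rate.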
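EER, the headline metric, is the operating point where the false-positive and false-negative rates cross. The following self-contained sketch computes it from raw detection scores; it implements the standard definition and is not code from the paper.

```python
import numpy as np

def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """Compute EER from detection scores.

    scores: higher means more likely to belong to the positive class (e.g., spoof).
    labels: 1 for positive-class trials, 0 for negative-class trials.
    Returns the error rate where FPR and FNR cross.
    """
    pos, neg = scores[labels == 1], scores[labels == 0]
    best_gap, eer = np.inf, 1.0
    # Sweep every observed score as a candidate decision threshold.
    for t in np.sort(np.unique(scores)):
        fnr = np.mean(pos < t)   # positives rejected at threshold t
        fpr = np.mean(neg >= t)  # negatives accepted at threshold t
        if abs(fpr - fnr) < best_gap:  # track the closest crossing point
            best_gap, eer = abs(fpr - fnr), (fpr + fnr) / 2
    return eer

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spoof = rng.normal(1.0, 1.0, 500)   # synthetic scores for spoofed trials
    real = rng.normal(-1.0, 1.0, 500)   # synthetic scores for bona fide trials
    scores = np.concatenate([spoof, real])
    labels = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])
    print(f"EER = {equal_error_rate(scores, labels):.3f}")
```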