Generalizable Speech Deepfake Detection via Information Bottleneck Enhanced Adversarial Alignment
Authors: Pu Huang, Shouguang Wang, Siya Yao, Mengchu Zhou
Published: 2025-09-28 03:48:49+00:00
AI Summary
This paper addresses the challenge of distribution shifts in speech deepfake detection by proposing the Information Bottleneck enhanced Confidence-Aware Adversarial Network (IB-CAAN). IB-CAAN aims to learn robust and shared discriminative features by suppressing attack-specific artifacts and minimizing nuisance variability across domains. The method achieves state-of-the-art generalization performance across several standard and cross-dataset benchmarks.
Abstract
Neural speech synthesis techniques have enabled highly realistic speech deepfakes, posing major security risks. Speech deepfake detection is challenging due to distribution shifts across spoofing methods and variability in speakers, channels, and recording conditions. We explore learning shared discriminative features as a path to robust detection and propose the Information Bottleneck enhanced Confidence-Aware Adversarial Network (IB-CAAN). Confidence-guided adversarial alignment adaptively suppresses attack-specific artifacts without erasing discriminative cues, while the information bottleneck removes nuisance variability to preserve transferable features. Experiments on ASVspoof 2019/2021, ASVspoof 5, and In-the-Wild demonstrate that IB-CAAN consistently outperforms baselines and achieves state-of-the-art performance on many benchmarks.
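To make the two mechanisms concrete, here is a minimal numpy sketch of quantities such a method plausibly computes; the exact formulation in the paper is not given here, so both functions are assumptions: the information bottleneck penalty is written in the standard variational-IB form (KL between a diagonal-Gaussian latent posterior and a standard normal), and the confidence weight is one hypothetical reading of "confidence-guided" (one minus the normalized entropy of the detector's bona-fide probability, so uncertain samples contribute less to adversarial alignment).

```python
import numpy as np

def vib_kl(mu, sigma):
    """Standard variational information bottleneck penalty:
    KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.
    This is an assumed form, not necessarily the paper's exact term."""
    return 0.5 * np.sum(mu**2 + sigma**2 - np.log(sigma**2) - 1.0)

def confidence_weight(p_bonafide):
    """Hypothetical confidence score: 1 minus the normalized binary
    entropy of the detector's bona-fide probability. Near-chance
    predictions (p ~ 0.5) get weight ~ 0, so the adversarial
    alignment loss is down-weighted for uncertain samples."""
    p = np.clip(p_bonafide, 1e-7, 1.0 - 1e-7)
    h = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return 1.0 - h / np.log(2.0)
```

In a training loop, `confidence_weight` would scale a per-sample domain-adversarial loss (e.g. from a gradient-reversal branch), while `vib_kl` would be added to the detection loss with a small coefficient; both couplings here are illustrative.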