Generalizable Speech Deepfake Detection via Information Bottleneck Enhanced Adversarial Alignment
Authors: Pu Huang, Shouguang Wang, Siya Yao, Mengchu Zhou
Published: 2025-09-28 03:48:49+00:00
AI Summary
The paper introduces the Information Bottleneck enhanced Confidence-Aware Adversarial Network (IB-CAAN) for generalizable speech deepfake detection. The method combines confidence-guided adversarial alignment, which suppresses attack-specific artifacts, with an information bottleneck that removes nuisance variability (speakers, channels, recording conditions), thereby preserving transferable discriminative features. Experiments show that IB-CAAN consistently outperforms baselines and achieves state-of-the-art performance on many benchmarks, addressing distribution shifts across spoofing methods.
Abstract
Neural speech synthesis techniques have enabled highly realistic speech deepfakes, posing major security risks. Speech deepfake detection is challenging due to distribution shifts across spoofing methods and variability in speakers, channels, and recording conditions. We explore learning shared discriminative features as a path to robust detection and propose the Information Bottleneck enhanced Confidence-Aware Adversarial Network (IB-CAAN). Confidence-guided adversarial alignment adaptively suppresses attack-specific artifacts without erasing discriminative cues, while the information bottleneck removes nuisance variability to preserve transferable features. Experiments on ASVspoof 2019/2021, ASVspoof 5, and In-the-Wild demonstrate that IB-CAAN consistently outperforms baselines and achieves state-of-the-art performance on many benchmarks.
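The abstract describes two interacting loss terms: a confidence-weighted adversarial term that discourages attack-specific features, and an information-bottleneck penalty on the latent representation. The paper does not give its equations here, so the following is only a minimal NumPy sketch under common assumptions: the adversarial branch is an attack-type classifier trained through gradient reversal, "confidence" is the detector's max softmax probability gating that term per sample, and the bottleneck is a variational IB KL term KL(N(mu, sigma^2) || N(0, I)). All function names and the weights `lam` and `beta` are hypothetical, not from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def vib_kl(mu, log_var):
    # Per-sample KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.
    # This is the standard variational information bottleneck penalty;
    # it is zero when mu = 0 and log_var = 0.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def confidence_weighted_adv_loss(det_logits, attack_logits, attack_labels):
    # Hypothetical confidence gating: the detector's max softmax probability
    # scales the attack-classifier cross-entropy per sample, so uncertain
    # samples contribute less to adversarial alignment.
    conf = softmax(det_logits).max(axis=-1)
    p_attack = softmax(attack_logits)
    ce = -np.log(p_attack[np.arange(len(attack_labels)), attack_labels] + 1e-12)
    return np.mean(conf * ce)

def ib_caan_objective(det_logits, det_labels,
                      attack_logits, attack_labels,
                      mu, log_var, lam=0.1, beta=1e-3):
    # Scalar objective: detection cross-entropy, minus the adversarial term
    # (the minus sign stands in for gradient reversal during training),
    # plus the IB penalty. lam and beta are illustrative weights.
    p = softmax(det_logits)
    task_ce = -np.mean(np.log(p[np.arange(len(det_labels)), det_labels] + 1e-12))
    adv = confidence_weighted_adv_loss(det_logits, attack_logits, attack_labels)
    kl = np.mean(vib_kl(mu, log_var))
    return task_ce - lam * adv + beta * kl
```

In a real training loop the adversarial branch would sit behind a gradient-reversal layer in an autodiff framework; this sketch only shows how the three scalar components could combine.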