Continual Audio Deepfake Detection via Universal Adversarial Perturbation
Authors: Wangjie Li, Lin Li, Qingyang Hong
Published: 2025-11-25 06:41:11+00:00
AI Summary
The paper proposes a novel framework for continual audio deepfake detection (ADD) that mitigates catastrophic forgetting under constantly evolving spoofing attacks. It integrates a Universal Adversarial Perturbation (UAP) into the model fine-tuning process, allowing the system to retain knowledge of historical spoofing distributions without storing past training data. By combining UAP-generated pseudo-spoofed samples with knowledge distillation, the approach offers an efficient and robust solution for continual learning in ADD.
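The summary describes learning a single perturbation that, when added to bona fide inputs, produces pseudo-spoofed samples standing in for past attack distributions. The paper does not publish its algorithm here, so the following is only a minimal sketch of the general UAP idea: projected gradient ascent on a shared input-space perturbation against a toy linear detector (`learn_uap`, `bce_loss`, and the linear model are all illustrative stand-ins, not the authors' method).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(X, y, w, b):
    """Binary cross-entropy of a linear detector on inputs X."""
    p = sigmoid(X @ w + b)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def learn_uap(X, y, w, b, eps=0.5, lr=0.05, steps=200):
    """Learn one perturbation delta shared across ALL samples (universal),
    via gradient ascent on the detector's loss, projected to an
    L-infinity ball of radius eps. A generic UAP sketch, not the paper's."""
    delta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid((X + delta) @ w + b)
        grad = np.mean(p - y) * w                      # d(mean BCE)/d(delta) for a linear model
        delta = np.clip(delta + lr * grad, -eps, eps)  # ascend, then project
    return delta

# Toy stand-in for bona fide audio features and a trained detector.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
y = rng.integers(0, 2, size=64).astype(float)
w = rng.normal(size=8)
b = 0.0

delta = learn_uap(X, y, w, b)
# Pseudo-spoofed samples: bona fide inputs plus the universal perturbation.
# In the paper's setting these would feed a knowledge-distillation loss
# during fine-tuning, so no historical training data needs to be stored.
X_pseudo = X + delta
```

Because a single `delta` is reused for every input, only one perturbation vector per past task needs to be kept, which is the storage advantage the summary alludes to.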
Abstract
The rapid advancement of speech synthesis and voice conversion technologies has raised significant security concerns in multimedia forensics. Although current detection models demonstrate impressive performance, they struggle to maintain effectiveness against constantly evolving deepfake attacks. Additionally, continually fine-tuning these models on historical training data incurs substantial computational and storage costs. To address these limitations, we propose a novel framework that incorporates Universal Adversarial Perturbation (UAP) into audio deepfake detection, enabling models to retain knowledge of historical spoofing distributions without direct access to past data. Our method integrates UAP seamlessly with pre-trained self-supervised audio models during fine-tuning. Extensive experiments validate the effectiveness of our approach, showcasing its potential as an efficient solution for continual learning in audio deepfake detection.