Multilingual Dataset Integration Strategies for Robust Audio Deepfake Detection: A SAFE Challenge System
Authors: Hashim Ali, Surya Subramani, Lekha Bollinani, Nithin Sai Adupa, Sali El-Loh, Hafiz Malik
Published: 2025-08-28 16:37:50+00:00
AI Summary
This paper explores multilingual dataset integration strategies for robust audio deepfake detection. By systematically evaluating self-supervised learning (SSL) front-ends, training data compositions, and audio length configurations, the authors achieved second place in two of the three SAFE Challenge tasks, demonstrating strong generalization and robustness.
Abstract
The SAFE Challenge evaluates synthetic speech detection across three tasks: unmodified audio, processed audio with compression artifacts, and laundered audio designed to evade detection. We systematically explore self-supervised learning (SSL) front-ends, training data compositions, and audio length configurations for robust deepfake detection. Our AASIST-based approach incorporates a WavLM-Large front-end with RawBoost augmentation, trained on a multilingual dataset of 256,600 samples spanning 9 languages and over 70 TTS systems drawn from CodecFake, MLAAD v5, SpoofCeleb, Famous Figures, and MAILABS. Through extensive experimentation with different SSL front-ends, three training data versions, and two audio lengths, we achieved second place in both Task 1 (unmodified audio detection) and Task 3 (laundered audio detection), demonstrating strong generalization and robustness.
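The overall pipeline described in the abstract (an SSL front-end producing frame-level features, followed by a classifier back-end scoring each clip as bonafide or spoofed) can be sketched minimally as below. This is an illustrative simplification, not the authors' implementation: the real system uses a pretrained WavLM-Large front-end and an AASIST graph-attention back-end, which are replaced here with a strided `Conv1d` stand-in and a mean-pool plus linear head so the sketch runs without pretrained weights.

```python
import torch
import torch.nn as nn

class DeepfakeDetectorSketch(nn.Module):
    """Toy SSL-frontend + classifier pipeline (hypothetical stand-in).

    The paper's system feeds WavLM-Large features into an AASIST
    back-end; both components are simplified here.
    """
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Stand-in for the SSL front-end (WavLM-Large in the paper):
        # a strided conv mapping raw waveform to frame-level features,
        # roughly 20 ms hop at 16 kHz (stride 320 samples).
        self.frontend = nn.Conv1d(1, feat_dim, kernel_size=400, stride=320)
        # Stand-in for the AASIST back-end: pool over frames,
        # then a two-class (bonafide vs. spoof) linear head.
        self.head = nn.Linear(feat_dim, 2)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) raw audio
        feats = self.frontend(wav.unsqueeze(1))  # (batch, feat_dim, frames)
        pooled = feats.mean(dim=-1)              # (batch, feat_dim)
        return self.head(pooled)                 # (batch, 2) logits

# Fixed-length input, e.g. 4-second clips at 16 kHz; the paper compares
# two such audio length configurations.
model = DeepfakeDetectorSketch()
batch = torch.randn(2, 4 * 16000)
logits = model(batch)
print(logits.shape)  # torch.Size([2, 2])
```

In practice the front-end would be swapped for a pretrained SSL encoder and the head for the full AASIST architecture; RawBoost-style waveform augmentation would be applied to `batch` before the forward pass during training.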