SEA-Spoof: Bridging The Gap in Multilingual Audio Deepfake Detection for South-East Asian Languages

Authors: Jinyang Wu, Nana Hou, Zihan Pan, Qiquan Zhang, Sailor Hardik Bhupendra, Soumik Mondal

Published: 2025-09-24 08:11:51+00:00

AI Summary

This paper introduces SEA-Spoof, the first large-scale audio deepfake detection dataset for South-East Asian languages. It benchmarks state-of-the-art models, revealing significant cross-lingual performance degradation, but demonstrates that fine-tuning on SEA-Spoof substantially improves detection accuracy.

Abstract

The rapid growth of the digital economy in South-East Asia (SEA) has amplified the risks of audio deepfakes, yet current datasets cover SEA languages only sparsely, leaving models poorly equipped to handle this region. This omission is critical: detection models trained on high-resource languages collapse when applied to SEA, due to mismatches in synthesis quality, language-specific characteristics, and data scarcity. To close this gap, we present SEA-Spoof, the first large-scale Audio Deepfake Detection (ADD) dataset specifically for SEA languages. SEA-Spoof spans 300+ hours of paired real and spoof speech across Tamil, Hindi, Thai, Indonesian, Malay, and Vietnamese. Spoof samples are generated from a diverse mix of state-of-the-art open-source and commercial systems, capturing wide variability in style and fidelity. Benchmarking state-of-the-art detection models reveals severe cross-lingual degradation, but fine-tuning on SEA-Spoof dramatically restores performance across languages and synthesis sources. These results highlight the urgent need for SEA-focused research and establish SEA-Spoof as a foundation for developing robust, cross-lingual, and fraud-resilient detection systems.


Key findings
State-of-the-art models trained on high-resource languages perform poorly on SEA-Spoof, demonstrating a significant cross-lingual generalization gap. Fine-tuning on SEA-Spoof dramatically improves detection performance, confirming the dataset's value for adapting models to the region. Commercial deepfake systems proved harder to detect than open-source ones, and detection difficulty varied across languages.
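Detection performance in this literature is conventionally reported as equal error rate (EER), where lower is better, so the cross-lingual gap shows up as a sharp EER increase on SEA languages. The following is a minimal sketch of computing EER from detector scores; the score convention (higher = more bonafide-like) and the random example data are illustrative assumptions, not values from the paper.

    import numpy as np
    from sklearn.metrics import roc_curve

    def compute_eer(labels, scores):
        # labels: 1 = bonafide (real), 0 = spoof; scores: higher = more bonafide-like.
        # EER is the operating point where false-accept and false-reject rates meet.
        fpr, tpr, _ = roc_curve(labels, scores)
        fnr = 1.0 - tpr
        idx = np.nanargmin(np.abs(fnr - fpr))  # threshold where the two rates cross
        return (fpr[idx] + fnr[idx]) / 2.0

    # Illustrative only: weakly separable synthetic scores for 1000 trials.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    scores = rng.normal(loc=labels.astype(float), scale=1.0)
    print(f"EER: {compute_eer(labels, scores):.3f}")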
Approach
The authors created SEA-Spoof, a multilingual audio deepfake detection dataset covering six South-East Asian languages. They then benchmarked existing detection models on it, exposing the performance gap between high-resource training languages and SEA languages, and showed that fine-tuning on SEA-Spoof substantially closes that gap. A fine-tuning sketch follows.
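As a concrete illustration of the fine-tuning step, here is a minimal PyTorch sketch that adapts a pretrained WavLM encoder with a binary bonafide/spoof head; the checkpoint name, mean pooling, and placeholder data loader are assumptions for illustration and do not reproduce the paper's actual training setup (e.g., MoLEx).

    import torch
    import torch.nn as nn
    from transformers import WavLMModel

    class SpoofClassifier(nn.Module):
        # Pretrained WavLM encoder + mean pooling + linear bonafide/spoof head.
        def __init__(self):
            super().__init__()
            self.encoder = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
            self.head = nn.Linear(self.encoder.config.hidden_size, 2)

        def forward(self, wave):  # wave: (batch, samples) at 16 kHz
            hidden = self.encoder(wave).last_hidden_state  # (batch, frames, dim)
            return self.head(hidden.mean(dim=1))           # (batch, 2) logits

    model = SpoofClassifier()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR for fine-tuning
    loss_fn = nn.CrossEntropyLoss()

    # Placeholder for a real SEA-Spoof DataLoader yielding (waveform, label)
    # batches; labels: 1 = bonafide, 0 = spoof.
    train_loader = [(torch.randn(2, 16000), torch.tensor([1, 0]))]

    for wave, label in train_loader:
        opt.zero_grad()
        loss = loss_fn(model(wave), label)
        loss.backward()
        opt.step()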
Datasets
SEA-Spoof (created by the authors), ASVspoof 2019, ASVspoof 5, MLAAD (Multi-Language Audio Anti-Spoofing Dataset), DFADD, Fake-or-Real, Mozilla Common Voice, Indic Speech Corpora, GigaSpeech2, Malay Conversational Speech Corpus, Malaysian YouTube Whisper-Large set, Thai Dialect Corpus, VIVOS corpus.
Model(s)
AASIST, AASIST3, MoLEx (with WavLM as feature extractor)
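MoLEx uses WavLM as its front-end, so a practical starting point is extracting frame-level WavLM features. A minimal sketch with HuggingFace transformers follows; the specific checkpoint is an assumption, since the summary does not say which WavLM variant was used.

    import torch
    from transformers import AutoFeatureExtractor, WavLMModel

    ckpt = "microsoft/wavlm-base-plus"  # assumed checkpoint
    fe = AutoFeatureExtractor.from_pretrained(ckpt)
    model = WavLMModel.from_pretrained(ckpt).eval()

    wave = torch.randn(16000)  # placeholder: 1 s of 16 kHz audio
    inputs = fe(wave.numpy(), sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        feats = model(**inputs).last_hidden_state  # (1, ~49 frames, 768)
    print(feats.shape)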
Author countries
Singapore