PhonemeDF: A Synthetic Speech Dataset for Audio Deepfake Detection and Naturalness Evaluation
Authors: Vamshi Nallaguntla, Aishwarya Fursule, Shruti Kshirsagar, Anderson R. Avila
Published: 2026-03-16 09:40:56+00:00
Comment: 11 pages, 6 figures, 9 tables. Accepted at the 15th Language Resources and Evaluation Conference (LREC 2026), Palma, Spain
AI Summary
This work introduces PhonemeDF, a new dataset for audio deepfake detection and naturalness evaluation, featuring parallel real and synthetic speech segmented at the phoneme level. It addresses the scarcity of phoneme-level resources by generating synthetic counterparts of LibriSpeech utterances with seven advanced TTS and VC systems, complete with phoneme alignments. The authors use Kullback-Leibler divergence (KLD) to quantify the fidelity between real and synthetic phoneme distributions, and demonstrate a correlation between KLD and classifier performance, suggesting KLD's utility in identifying the most discriminative phonemes for deepfake detection.
Abstract
The growing sophistication of speech generated by Artificial Intelligence (AI) has introduced new challenges in audio deepfake detection. Text-to-speech (TTS) and voice conversion (VC) technologies can create highly convincing synthetic speech with high naturalness and intelligibility. This poses serious threats to voice biometric security and to systems designed to combat the spread of spoken misinformation, where synthetic voices may be used to disseminate false or malicious content. While interest in AI-generated speech has increased, resources for evaluating naturalness at the phoneme level remain limited. In this work, we address this gap by presenting the Phoneme-Level DeepFake dataset (PhonemeDF), comprising parallel real and synthetic speech segmented at the phoneme level. Real speech samples are derived from a subset of LibriSpeech, while synthetic samples are generated using four TTS and three VC systems. For each system, phoneme-aligned TextGrid files are obtained using the Montreal Forced Aligner (MFA). We compute the Kullback-Leibler divergence (KLD) between real and synthetic phoneme distributions to quantify fidelity and establish a ranking based on similarity to natural speech. Our findings show a clear correlation between the KLD of real and synthetic phoneme distributions and the performance of classifiers trained to distinguish them, suggesting that KLD can serve as an indicator of the most discriminative phonemes for deepfake detection.
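The abstract does not specify the exact features over which the phoneme distributions are computed, but the core measurement it describes — KLD between a "real" and a "synthetic" phoneme distribution — can be sketched as below. All names, the toy phoneme sequences, and the smoothing constant are illustrative assumptions, not the authors' implementation; in the actual pipeline the phoneme labels would come from MFA-produced TextGrid alignments.

```python
import math
from collections import Counter

def phoneme_distribution(phonemes, vocab, eps=1e-8):
    """Relative frequency of each phoneme over a shared vocabulary,
    with additive smoothing so the KLD stays finite for zero counts."""
    counts = Counter(phonemes)
    total = len(phonemes) + eps * len(vocab)
    return {p: (counts.get(p, 0) + eps) / total for p in vocab}

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x))."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p)

# Toy ARPAbet-style label sequences standing in for phoneme alignments
# of a real utterance and its synthetic (TTS/VC) counterpart.
real = ["AH", "T", "S", "AH", "N", "T", "AH"]
synth = ["AH", "T", "S", "S", "N", "T", "T"]
vocab = sorted(set(real) | set(synth))

p_real = phoneme_distribution(real, vocab)
p_synth = phoneme_distribution(synth, vocab)
print(f"KLD(real || synthetic) = {kl_divergence(p_real, p_synth):.4f}")
```

A low KLD would indicate a synthesis system whose phoneme statistics closely match natural speech; per the abstract, phonemes with high divergence are the ones most useful for a deepfake classifier.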