PhonemeDF: A Synthetic Speech Dataset for Audio Deepfake Detection and Naturalness Evaluation

Authors: Vamshi Nallaguntla, Aishwarya Fursule, Shruti Kshirsagar, Anderson R. Avila

Published: 2026-03-16 09:40:56+00:00

Comment: 11 pages, 6 figures, 9 tables. Accepted at the 15th Language Resources and Evaluation Conference (LREC 2026), Palma, Spain

AI Summary

This work introduces PhonemeDF, a new dataset for audio deepfake detection and naturalness evaluation, featuring parallel real and synthetic speech segmented at the phoneme level. It addresses the scarcity of phoneme-level resources by generating synthetic counterparts of LibriSpeech utterances with seven modern TTS and VC systems (four TTS, three VC), each accompanied by phoneme alignments. The authors use Kullback-Leibler divergence (KLD) to quantify the fidelity between real and synthetic phoneme distributions and show that KLD correlates with classifier performance, suggesting its utility for identifying the phonemes most discriminative for deepfake detection.

Abstract

The growing sophistication of speech generated by Artificial Intelligence (AI) has introduced new challenges in audio deepfake detection. Text-to-speech (TTS) and voice conversion (VC) technologies can create highly convincing synthetic speech with high naturalness and intelligibility. This poses serious threats to voice biometric security and to systems designed to combat the spread of spoken misinformation, where synthetic voices may be used to disseminate false or malicious content. While interest in AI-generated speech has increased, resources for evaluating naturalness at the phoneme level remain limited. In this work, we address this gap by presenting the Phoneme-Level DeepFake dataset (PhonemeDF), comprising parallel real and synthetic speech segmented at the phoneme level. Real speech samples are derived from a subset of LibriSpeech, while synthetic samples are generated using four TTS and three VC systems. For each system, phoneme-aligned TextGrid files are obtained using the Montreal Forced Aligner (MFA). We compute the Kullback-Leibler divergence (KLD) between real and synthetic phoneme distributions to quantify fidelity and establish a ranking based on similarity to natural speech. Our findings show a clear correlation between the KLD of real and synthetic phoneme distributions and the performance of classifiers trained to distinguish them, suggesting that KLD can serve as an indicator of the most discriminative phonemes for deepfake detection.
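The phoneme boundaries in PhonemeDF are distributed as Praat TextGrid files produced by MFA. The paper does not specify a reading tool, but extracting per-phoneme intervals from MFA's long-format output can be sketched with a small regex-based parser (`parse_intervals` is a hypothetical helper, not from the paper; a production pipeline would use a dedicated TextGrid library):

```python
import re

def parse_intervals(textgrid_text):
    """Extract (xmin, xmax, label) triples from a Praat long-format TextGrid.

    Minimal sketch: scans `intervals [n]:` blocks with a regex. Assumes the
    well-formed long format that MFA emits and does not distinguish between
    tiers (e.g. words vs. phones); it simply returns every interval found.
    """
    pattern = re.compile(
        r'intervals \[\d+\]:\s*'
        r'xmin = ([\d.]+)\s*'
        r'xmax = ([\d.]+)\s*'
        r'text = "([^"]*)"'
    )
    return [(float(a), float(b), lab)
            for a, b, lab in pattern.findall(textgrid_text)]
```

The returned `(start, end, phoneme)` triples are the natural unit for the phoneme-level analyses described below, e.g. slicing audio into per-phoneme segments or collecting duration statistics per phoneme label.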


Key findings

The study found that certain phoneme categories, specifically diphthongs (e.g., /OY/), fricatives (e.g., /SH/, /ZH/), and plosives (e.g., /P/, /T/), consistently provide strong cues for distinguishing synthetic from real speech. Handcrafted features highlight larger acoustic mismatches and show a positive correlation between KLD and detection accuracy, while self-supervised embeddings capture more subtle phonetic inconsistencies, often exhibiting an inverse or weak correlation. Voice conversion systems generally produce larger phoneme-level deviations from natural speech than modern TTS models like VITS.
Approach

The authors construct a dataset (PhonemeDF) of phoneme-aligned real and synthetic speech, derived from LibriSpeech and generated by various TTS/VC systems, using the Montreal Forced Aligner for segmentation. They then analyze the discriminability of phonemes by computing Kullback-Leibler divergence (KLD) between real and synthetic phoneme distributions and evaluate deepfake detection performance using Logistic Regression and SVM classifiers with both handcrafted and self-supervised learning features.
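The core measurement above is a KLD between real and synthetic phoneme distributions. The paper's exact featurization is not reproduced here; as an illustrative assumption, a discrete KLD over histogrammed per-phoneme values (e.g. durations in seconds) with additive smoothing can be sketched as:

```python
import math

def kl_divergence(real_samples, synth_samples, bins=20,
                  lo=0.0, hi=0.4, eps=1e-9):
    """Discrete D_KL(P_real || Q_synth) over histogrammed scalar values.

    Illustrative sketch, not the paper's implementation: `real_samples` and
    `synth_samples` are e.g. durations of one phoneme across utterances.
    Additive smoothing (eps) keeps the divergence finite when a bin is
    empty on one side.
    """
    def hist(samples):
        counts = [0] * bins
        for x in samples:
            idx = min(bins - 1, max(0, int((x - lo) / (hi - lo) * bins)))
            counts[idx] += 1
        total = sum(counts) + eps * bins
        return [(c + eps) / total for c in counts]

    p, q = hist(real_samples), hist(synth_samples)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Computed per phoneme, such a score yields the fidelity ranking described in the abstract: phonemes with large divergence are the candidates the classifiers find most discriminative.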
Datasets

LibriSpeech, VCTK, PhonemeDF (newly created)
Model(s)

Logistic Regression (LR), Support Vector Machine (SVM) for classification; WavLM, wav2vec 2.0 for self-supervised embeddings; Log-Mel Spectrograms (LogSpec), Linear-Frequency Cepstral Coefficients (LFCC) for handcrafted features.
Author countries

USA, Canada