Classical Machine Learning Baselines for Deepfake Audio Detection on the Fake-or-Real Dataset
Authors: Faheem Ahmad, Ajan Ahmed, Masudul Imtiaz
Published: 2026-04-15 01:59:43+00:00
Comment: Accepted for oral presentation at the 35th IEEE Microelectronics Design and Test Symposium
AI Summary
This paper establishes an interpretable classical machine learning baseline for deepfake audio detection on the Fake-or-Real (FoR) dataset. It extracts prosodic, voice-quality, and spectral features from short audio clips and evaluates a range of classical classifiers. An RBF SVM performs best (~93% test accuracy, ~7% EER), and feature analysis highlights pitch variability and spectral richness as the key cues separating real from synthetic speech.
Abstract
Deep learning has enabled highly realistic synthetic speech, raising concerns about fraud, impersonation, and disinformation. Despite rapid progress in neural detectors, transparent baselines are needed to reveal which acoustic cues reliably separate real from synthetic speech. This paper presents an interpretable classical machine learning baseline for deepfake audio detection using the Fake-or-Real (FoR) dataset. We extract prosodic, voice-quality, and spectral features from two-second clips at 44.1 kHz (high-fidelity) and 16 kHz (telephone-quality) sampling rates. Statistical analysis (ANOVA, correlation heatmaps) identifies features that differ significantly between real and fake speech. We then train multiple classifiers -- Logistic Regression, LDA, QDA, Gaussian Naive Bayes, SVMs, and GMMs -- and evaluate performance using accuracy, ROC-AUC, EER, and DET curves. Pairwise McNemar's tests confirm statistically significant differences between models. The best model, an RBF SVM, achieves ~93% test accuracy and ~7% EER on both sampling rates, while linear models reach ~75% accuracy. Feature analysis reveals that pitch variability and spectral richness (spectral centroid, bandwidth) are key discriminative cues. These results provide a strong, interpretable baseline for future deepfake audio detectors.
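To make two of the abstract's ingredients concrete, the sketch below shows NumPy-only approximations of (a) the spectral centroid and bandwidth features and (b) the equal error rate (EER) metric. This is an illustrative sketch, not the paper's implementation: the function names are mine, the centroid/bandwidth formulas are the standard magnitude-spectrum moments (libraries such as librosa compute them similarly but frame-wise), and the EER here is a simple threshold sweep rather than a DET-curve interpolation.

```python
import numpy as np

def spectral_centroid_bandwidth(clip, sr):
    """Centroid (spectral 'center of mass') and bandwidth (spread) of one clip.

    Computed over the whole clip's magnitude spectrum; frame-wise statistics
    (mean/std over short frames) would be closer to typical feature pipelines.
    """
    mag = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / sr)
    p = mag / (mag.sum() + 1e-12)                  # normalize to a distribution
    centroid = float(np.sum(freqs * p))            # first moment (Hz)
    bandwidth = float(np.sqrt(np.sum(p * (freqs - centroid) ** 2)))  # spread (Hz)
    return centroid, bandwidth

def equal_error_rate(scores_real, scores_fake):
    """EER: operating point where false-accept and false-reject rates are equal.

    Higher score = more likely real. Sweeps every observed score as a threshold
    and returns the mean of FAR and FRR at the point where they are closest.
    """
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.concatenate([scores_real, scores_fake])):
        far = np.mean(scores_fake >= t)   # fakes accepted as real
        frr = np.mean(scores_real < t)    # real speech rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return float(eer)

# Toy check: a pure 1 kHz tone has a centroid near 1000 Hz and a much
# smaller bandwidth than broadband noise (more "spectral richness").
sr = 16000
t = np.arange(sr * 2) / sr  # two-second clip, as in the paper
tone_c, tone_b = spectral_centroid_bandwidth(np.sin(2 * np.pi * 1000 * t), sr)
noise_c, noise_b = spectral_centroid_bandwidth(
    np.random.default_rng(0).standard_normal(sr * 2), sr)
print(round(tone_c), tone_b < noise_b)  # → 1000 True
```

In the paper's pipeline such features would be computed per clip, standardized, and fed to the classifiers (Logistic Regression through RBF SVM), with EER reported alongside accuracy and ROC-AUC.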