Exposing AI-Synthesized Human Voices Using Neural Vocoder Artifacts

Authors: Chengzhe Sun, Shan Jia, Shuwei Hou, Ehab AlBadawy, Siwei Lyu

Published: 2023-02-18 00:29:22+00:00

Comment: Dataset and code will be available at https://github.com/csun22/LibriVoc-Dataset

AI Summary

This work introduces a novel approach to detect AI-synthesized human voices by identifying artifacts inherent to neural vocoders, which are core components in most DeepFake audio synthesis models. It proposes a multi-task learning framework for a binary-class RawNet2 model, where a shared front-end feature extractor is constrained by a vocoder identification pretext task. This strategy forces the feature extractor to focus on vocoder artifacts, yielding highly discriminative features for robust synthetic voice detection.

Abstract

The advancements of AI-synthesized human voices have introduced a growing threat of impersonation and disinformation. It is therefore of practical importance to develop detection methods for synthetic human voices. This work proposes a new approach to detect synthetic human voices based on identifying artifacts of neural vocoders in audio signals. A neural vocoder is a specially designed neural network that synthesizes waveforms from temporal-frequency representations, e.g., mel-spectrograms. The neural vocoder is a core component in most DeepFake audio synthesis models. Hence, the identification of neural vocoder processing implies that an audio sample may have been synthesized. To take advantage of the vocoder artifacts for synthetic human voice detection, we introduce a multi-task learning framework for a binary-class RawNet2 model that shares the front-end feature extractor with a vocoder identification module. We treat the vocoder identification as a pretext task to constrain the front-end feature extractor to focus on vocoder artifacts and provide discriminative features for the final binary classifier. Our experiments show that the improved RawNet2 model based on vocoder identification achieves an overall high classification performance on the binary task.


Key findings
The proposed multi-task learning approach with vocoder identification achieved superior classification performance, outperforming baseline methods with EERs of 1.41% on LibriVoc, 0.19% on WaveFake, and 4.54% on ASVspoof 2019. The method also demonstrated good robustness against common post-processing operations like resampling and background noise. Ablation studies confirmed that a balanced weighting between the binary classification and multi-class vocoder identification losses was optimal for performance.
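The results above are reported as Equal Error Rates (EER), the operating point where the false-accept rate on real audio equals the false-reject rate on synthetic audio. As a minimal illustration of the metric (not the paper's evaluation code), the sketch below computes EER from two score arrays, assuming the convention that higher scores indicate synthetic speech:

```python
import numpy as np

def compute_eer(real_scores, fake_scores):
    """Equal Error Rate: the threshold where the false-accept rate
    (real audio flagged as fake) equals the false-reject rate
    (fake audio missed). Returns the rate at that crossing point."""
    thresholds = np.sort(np.concatenate([real_scores, fake_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(real_scores >= t)  # real audio wrongly flagged
        frr = np.mean(fake_scores < t)   # synthetic audio missed
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

# Toy example with well-separated detector scores: EER is near zero.
rng = np.random.default_rng(0)
real = rng.normal(0.1, 0.1, 1000)
fake = rng.normal(0.9, 0.1, 1000)
print(f"EER: {compute_eer(real, fake):.4f}")
```

A lower EER means a better detector; the paper's 0.19% on WaveFake corresponds to the two error rates crossing at roughly 2 mistakes per 1000 samples.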
Approach
The authors propose a multi-task learning framework using a RawNet2 model for synthetic human voice detection. This model shares its front-end feature extractor with a vocoder identification module, which serves as a pretext task. By training the feature extractor to identify various neural vocoders, it becomes sensitive to the subtle artifacts left by these vocoders, thereby providing discriminative features for the primary binary classification task of distinguishing real from synthetic voices.
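The training objective pairs the primary real/fake loss with the vocoder-identification pretext loss over the shared front-end features. The sketch below is an illustrative rendering of that joint loss, not the authors' code; the function names and the single `weight` parameter are assumptions, chosen to mirror the ablation's finding that a balanced weighting of the two losses works best:

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for one example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def multitask_loss(binary_logits, binary_label,
                   vocoder_logits, vocoder_label, weight=0.5):
    """Joint objective: real/fake classification loss plus the
    multi-class vocoder-ID pretext loss, both driven by the same
    shared front-end features. weight=0.5 is the balanced setting."""
    l_bin = cross_entropy(binary_logits, binary_label)  # real vs. fake
    l_voc = cross_entropy(vocoder_logits, vocoder_label)  # which vocoder
    return weight * l_bin + (1.0 - weight) * l_voc
```

Because the feature extractor receives gradients from both heads, it cannot ignore vocoder-specific artifacts even when the binary task alone would not require them.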
Datasets
LibriVoc Dataset (newly constructed from LibriTTS/LibriSpeech/LibriVox), WaveFake Dataset (derived from LJSpeech), ASVspoof 2019 Dataset (derived from the VCTK base corpus).
Model(s)
RawNet2
Author countries
USA