SceneFake: An Initial Dataset and Benchmarks for Scene Fake Audio Detection

Authors: Jiangyan Yi, Chenglong Wang, Jianhua Tao, Chu Yuan Zhang, Cunhang Fan, Zhengkun Tian, Haoxin Ma, Ruibo Fu

Published: 2022-11-11 09:05:50+00:00

Comment: Accepted by Pattern Recognition, 1 April 2024

AI Summary

This paper introduces SceneFake, a novel dataset for detecting scene-manipulated audio, where an original audio's acoustic scene is altered using speech enhancement technologies. The dataset aims to address a gap in existing fake audio datasets, which primarily focus on timbre, prosody, content, or channel noise manipulation. Benchmarks on SceneFake using baseline models indicate that these models struggle to reliably detect scene fake utterances, especially on unseen test sets, despite performing well on seen data.

Abstract

Many datasets have been designed to further the development of fake audio detection. However, fake utterances in previous datasets are mostly generated by altering the timbre, prosody, linguistic content or channel noise of original audio. These datasets leave out a scenario in which the acoustic scene of an original audio is manipulated with a forged one. Such manipulated audio would pose a major threat to society if misused for malicious purposes, which motivates us to fill in the gap. This paper proposes a dataset for scene fake audio detection named SceneFake, where a manipulated audio is generated by tampering only with the acoustic scene of a real utterance using speech enhancement technologies. Benchmark results for scene fake audio detection on the SceneFake dataset are reported, along with an analysis of fake attacks under different speech enhancement technologies and signal-to-noise ratios. The results indicate that scene fake utterances cannot be reliably detected by baseline models trained on the ASVspoof 2019 dataset. Although these models perform well on the SceneFake training set and the seen test set, their performance is poor on the unseen test set. The dataset (https://zenodo.org/record/7663324#.Y_XKMuPYuUk) and benchmark source codes (https://github.com/ADDchallenge/SceneFake) are publicly available.


Key findings
Existing baseline models (GMM, LCNN, RawNet2) trained on the ASVspoof 2019 dataset, or even on noisy variants of its LA subset, perform poorly at scene fake audio detection. While these models achieve good performance on the SceneFake seen test set, their performance degrades significantly on the unseen test set, indicating difficulty generalizing to unknown scene manipulations. Detection performance also varies with the SNR and the type of speech enhancement used to create the fake, with models struggling particularly at -5 dB and 20 dB on unseen data.
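
Benchmarks of this kind are conventionally scored with the equal error rate (EER); the sketch below shows one common way to compute it per SNR condition. It is a minimal illustration, not the paper's evaluation code, and the random scores, the `compute_eer` helper and the SNR grid are all hypothetical.

```python
# Minimal sketch of equal error rate (EER) computation, the usual metric
# in fake audio detection benchmarks. Scores here are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(bonafide_scores: np.ndarray, fake_scores: np.ndarray) -> float:
    """Return the EER given detector scores (higher = more bonafide)."""
    labels = np.concatenate([np.ones_like(bonafide_scores),
                             np.zeros_like(fake_scores)])
    scores = np.concatenate([bonafide_scores, fake_scores])
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    # EER is the operating point where false-positive and false-negative
    # rates cross; take the threshold that minimizes their gap.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[idx] + fnr[idx]) / 2.0)

# Hypothetical per-SNR evaluation over an unseen test split.
rng = np.random.default_rng(0)
for snr_db in (-5, 0, 5, 10, 15, 20):
    bona = rng.normal(1.0, 1.0, 200)   # placeholder bonafide scores
    fake = rng.normal(-1.0, 1.0, 200)  # placeholder fake scores
    print(f"SNR {snr_db:>3} dB: EER = {compute_eer(bona, fake):.3f}")
```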
Approach
The authors developed a new dataset, SceneFake, by manipulating the acoustic scene of real utterances. This involves a two-step process: first, enhancing real speech with various speech enhancement technologies to remove its original acoustic scene; second, adding a different, forged acoustic scene to the enhanced speech (see the sketch below). They then benchmarked existing audio deepfake detection models on the new dataset.
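
A minimal sketch of this two-step manipulation, assuming mono waveforms as NumPy arrays; `enhance` stands in for any of the speech enhancement models the paper uses, and `mix_at_snr` is a hypothetical helper that injects the forged scene at a target SNR. The mixing convention is illustrative and may differ from the paper's exact recipe.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, scene: np.ndarray, snr_db: float) -> np.ndarray:
    """Add an acoustic scene to speech at a target signal-to-noise ratio."""
    scene = np.resize(scene, speech.shape)          # loop/trim scene to length
    p_speech = np.mean(speech ** 2)
    p_scene = np.mean(scene ** 2) + 1e-12
    # Scale the scene so that 10*log10(p_speech / p_scaled_scene) == snr_db.
    gain = np.sqrt(p_speech / (p_scene * 10 ** (snr_db / 10)))
    return speech + gain * scene

def make_scene_fake(real_audio: np.ndarray,
                    forged_scene: np.ndarray,
                    snr_db: float,
                    enhance) -> np.ndarray:
    """Two-step scene fake: (1) enhance the utterance to strip its original
    scene, (2) add a different, forged scene at the chosen SNR."""
    clean = enhance(real_audio)                     # step 1: remove original scene
    return mix_at_snr(clean, forged_scene, snr_db)  # step 2: inject forged scene
```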
Datasets
SceneFake, ASVspoof 2019 (LA subset), DCASE 2022 Challenge (acoustic scene dataset), WSJ0-SI84 (for training the speech enhancement models)
Model(s)
GMM, LCNN, RawNet2
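
For context, the sketch below shows the Max-Feature-Map (MFM) activation that characterizes LCNN-style detectors: a convolution doubles the channel count, and MFM halves it back by taking an element-wise max over the two halves. The layer sizes and input shape are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MFM(nn.Module):
    """Max-Feature-Map: split channels in half, take the element-wise max."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)

# Example: a conv block producing 32 channels so MFM reduces them to 16.
block = nn.Sequential(nn.Conv2d(1, 32, kernel_size=5, padding=2), MFM())
out = block(torch.randn(4, 1, 60, 100))  # e.g. a batch of LFCC feature maps
print(out.shape)  # torch.Size([4, 16, 60, 100])
```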
Author countries
China