EmoFake: An Initial Dataset for Emotion Fake Audio Detection

Authors: Yan Zhao, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Xiaohui Zhang, Yongfeng Dong

Published: 2022-11-10 06:09:51+00:00

AI Summary

This paper introduces EmoFake, a novel dataset designed for detecting emotion fake audio, in which the emotion state of speech is altered while other information, such as speaker identity and content, remains unchanged. The dataset is generated using seven open-source emotion voice conversion models. The authors also propose GADE (Graph Attention networks using Deep Emotion embedding) for detecting emotion fake audio. Benchmark experiments with existing fake audio detection models show that EmoFake poses a significant challenge, revealing a notable degradation in their performance against emotion fake audio.

Abstract

Many datasets have been designed to further the development of fake audio detection, such as the datasets of the ASVspoof and ADD challenges. However, these datasets do not cover the case in which the emotion of the audio has been changed from one state to another while other information (e.g., speaker identity and content) remains the same. Changing the emotion of an audio clip can alter its semantics, and speech with tampered semantics may pose threats to people's lives. Therefore, this paper reports our progress in developing EmoFake, an emotion fake audio detection dataset in which the emotion state of the original audio is changed. The fake audio in EmoFake is generated by open-source emotion voice conversion models. Furthermore, we propose a method named Graph Attention networks using Deep Emotion embedding (GADE) for the detection of emotion fake audio. Benchmark experiments are conducted on this dataset. The results show that our dataset poses a challenge to fake audio detection models trained on the LA dataset of ASVspoof 2019, while the proposed GADE performs well against emotion fake audio.


Key findings
Existing fake audio detection models trained on standard datasets such as ASVspoof 2019 and ADD 2022 show a significant decrease in discriminative ability (i.e., an increased equal error rate, EER) when faced with emotion fake audio from the EmoFake dataset. This highlights that current detection systems are vulnerable to emotion manipulation. Furthermore, training these models directly on EmoFake significantly improves their detection of emotion fake audio, with AASIST performing best on English emotion fake audio and RawNet2 on Chinese.
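The degradation reported above is measured in EER, the operating point at which the false acceptance rate (spoof accepted as bonafide) equals the false rejection rate (bonafide rejected). As a minimal sketch of how EER is typically computed from detector scores, the function below assumes higher scores mean "more likely bonafide"; the function name and score convention are illustrative, not taken from the paper.

```python
def compute_eer(bonafide_scores, spoof_scores):
    """Return the equal error rate given detector scores.

    Assumes higher score = more likely bonafide. Sweeps every
    observed score as a threshold and returns the point where
    the false acceptance rate (FAR) and false rejection rate
    (FRR) are closest.
    """
    thresholds = sorted(bonafide_scores + spoof_scores)
    best_gap, eer = None, None
    for t in thresholds:
        # FRR: fraction of bonafide trials rejected (score below threshold)
        frr = sum(s < t for s in bonafide_scores) / len(bonafide_scores)
        # FAR: fraction of spoof trials accepted (score at/above threshold)
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer


# Well-separated scores give an EER near 0; heavy overlap pushes it
# toward 0.5 (chance), which is the kind of degradation EmoFake exposes.
print(compute_eer([0.9, 0.8, 0.95], [0.1, 0.2, 0.05]))  # → 0.0
```

In practice, benchmark toolkits interpolate the FAR/FRR curves rather than sweeping raw scores, but the crossing-point idea is the same.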
Approach
The authors built EmoFake by taking source audio from the Emotional Speech Database (ESD) and applying seven different open-source Emotion Voice Conversion (EVC) models to change its emotion state. They then evaluated five established fake audio detection models (LCNN, RawNet2, SENet, ResNet34, AASIST) on EmoFake, comparing the results against standard deepfake datasets.
Datasets
EmoFake, Emotional Speech Database (ESD), ASVspoof 2019 LA dataset, ASVspoof 2021 eval, ADD 2022 track 3.2, ADD 2023 track 1.2 R1
Model(s)
LCNN, RawNet2, SENet, ResNet34, AASIST
Author countries
China