EmoAnti: audio anti-deepfake with refined emotion-guided representations

Authors: Xiaokang Li, Yicheng Gong, Dinghao Zou, Xin Cao, Sunbowen Lee

Published: 2025-09-13 01:58:34+00:00

AI Summary

EmoAnti is a novel audio deepfake detection system that leverages emotional cues for improved generalization. It uses a Wav2Vec2 model fine-tuned on emotion recognition and a convolutional residual feature extractor to refine emotional representations, achieving state-of-the-art performance on ASVspoof benchmarks.

Abstract

Audio deepfakes have become so sophisticated that the lack of effective detection methods is a critical problem. Most detection systems rely primarily on low-level acoustic features or pretrained speech representations and frequently neglect high-level emotional cues, which can provide complementary, potentially anti-deepfake information that improves generalization. In this work, we propose EmoAnti, a novel audio anti-deepfake system that exploits emotional features: a pretrained Wav2Vec2 (W2V2) model fine-tuned on emotion recognition tasks derives emotion-guided representations, and a dedicated feature extractor built from convolutional layers with residual connections captures and refines emotional characteristics from the transformer layers' outputs. Experimental results show that the proposed architecture achieves state-of-the-art performance on both the ASVspoof2019LA and ASVspoof2021LA benchmarks and demonstrates strong generalization on the ASVspoof2021DF dataset. The code for our approach is available on Anonymous GitHub.
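
For concreteness, the sketch below shows one way to obtain emotion-guided representations from a Wav2Vec2 model fine-tuned on emotion recognition, using the Hugging Face transformers API. This is not the authors' released code: the checkpoint path is a placeholder, and taking the last transformer layer's hidden states is an assumption; the paper's actual fine-tuning setup on IEMOCAP is not reproduced here.

```python
# Minimal sketch (assumptions noted): extract emotion-guided representations
# from a Wav2Vec2 backbone fine-tuned on emotion recognition.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

CHECKPOINT = "path/to/w2v2-finetuned-on-emotion"  # hypothetical checkpoint name

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(CHECKPOINT)
backbone = Wav2Vec2Model.from_pretrained(CHECKPOINT, output_hidden_states=True)
backbone.eval()

def emotion_guided_representation(waveform: torch.Tensor, sr: int = 16000) -> torch.Tensor:
    """Return transformer-layer outputs of shape (batch, time, dim)."""
    inputs = feature_extractor(waveform.numpy(), sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        outputs = backbone(**inputs)
    # Assumption: use the last transformer layer's hidden states as the
    # emotion-guided features to be refined downstream (see Approach).
    return outputs.last_hidden_state
```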


Key findings
EmoAnti achieves state-of-the-art performance on the ASVspoof2019LA and ASVspoof2021LA benchmarks and strong results on ASVspoof2021DF. Ablation studies confirm that both the emotion-guided representations and the convolutional residual feature extractor are important for effective deepfake detection. Generalization is stronger on ASVspoof2021LA than on ASVspoof2021DF.
Approach
EmoAnti fine-tunes a Wav2Vec2 model on emotion recognition to extract emotion-guided representations. A convolutional residual feature extractor refines these representations, capturing subtle emotional discrepancies between real and fake audio. These refined features are then used for classification.
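
A minimal PyTorch sketch of how the convolutional residual feature extractor and linear classifier could be realized follows. Channel widths, kernel sizes, the number of residual blocks, and the temporal average pooling are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: refine emotion-guided W2V2 representations with
# residual 1-D convolutions, then classify bona fide vs. spoof.
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """1-D convolutional block with a residual (skip) connection."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.conv(x))  # residual connection

class EmotionFeatureRefiner(nn.Module):
    """Refines emotion-guided representations and outputs spoof/bona fide logits."""
    def __init__(self, in_dim: int = 768, hidden: int = 128, num_blocks: int = 2):
        super().__init__()
        self.proj = nn.Conv1d(in_dim, hidden, kernel_size=1)
        self.blocks = nn.Sequential(*[ResidualConvBlock(hidden) for _ in range(num_blocks)])
        self.classifier = nn.Linear(hidden, 2)  # bona fide vs. spoof

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        # reps: (batch, time, dim) transformer outputs -> (batch, dim, time) for Conv1d
        x = self.blocks(self.proj(reps.transpose(1, 2)))
        return self.classifier(x.mean(dim=-1))  # temporal average pooling

# Usage: logits = EmotionFeatureRefiner()(emotion_guided_representation(waveform))
```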
Datasets
IEMOCAP (for fine-tuning), ASVspoof2019LA, ASVspoof2021LA, ASVspoof2021DF
Model(s)
Wav2Vec2 (fine-tuned for emotion recognition), convolutional residual feature extractor, linear classifier
Author countries
China