Emotion and Acoustics Should Agree: Cross-Level Inconsistency Analysis for Audio Deepfake Detection
Authors: Jinhua Zhang, Zhenqi Jia, Rui Liu
Published: 2026-01-20 11:01:26+00:00
Comment: Accepted by ICASSP 2026
AI Summary
This paper proposes EAI-ADD, a novel audio deepfake detection framework that uses cross-level emotion-acoustic inconsistency as the primary detection signal. It addresses the limitations of prior methods that treat acoustic and emotional features in isolation or rely on correlation metrics, which overlook subtle desynchronization and smooth out abrupt discontinuities in spoofed speech. EAI-ADD projects emotional and acoustic representations into a comparable space and progressively integrates frame-level and utterance-level emotion features with acoustic features to capture inconsistencies across different temporal granularities.
Abstract
Audio Deepfake Detection (ADD) aims to distinguish spoofed speech from bona fide speech. Most prior studies assume that stronger correlations within or across acoustic and emotional features imply authenticity, and thus focus on enhancing or measuring such correlations. However, existing methods often treat acoustic and emotional features in isolation or rely on correlation metrics, which overlook subtle desynchronization between them and smooth out abrupt discontinuities. To address these issues, we propose EAI-ADD, which treats cross-level emotion-acoustic inconsistency as the primary detection signal. We first project emotional and acoustic representations into a comparable space. Then we progressively integrate frame-level and utterance-level emotion features with acoustic features to capture cross-level emotion-acoustic inconsistencies across different temporal granularities. Experimental results on the ASVspoof 2019 LA and 2021 LA datasets demonstrate that the proposed EAI-ADD outperforms baselines, providing a more effective solution for audio anti-spoofing.
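To make the two-step idea in the abstract concrete, here is a minimal PyTorch-style sketch: project emotion and acoustic features into a shared space, then derive a frame-level and an utterance-level inconsistency signal and classify from both. All module names, dimensions, and the specific fusion choices (cross-attention, mean pooling, feature differences) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EmotionAcousticInconsistency(nn.Module):
    """Sketch of cross-level emotion-acoustic inconsistency modeling.

    Assumed inputs: frame-level emotion features (e.g. from an SER encoder)
    and frame-level acoustic features (e.g. from a speech SSL front end),
    time-aligned to the same number of frames T.
    """

    def __init__(self, d_emo=256, d_ac=768, d_shared=256, n_classes=2):
        super().__init__()
        # Step 1: project both streams into a comparable space.
        self.emo_proj = nn.Linear(d_emo, d_shared)
        self.ac_proj = nn.Linear(d_ac, d_shared)
        # Frame-level integration: acoustic frames attend to emotion frames.
        self.frame_attn = nn.MultiheadAttention(d_shared, num_heads=4, batch_first=True)
        # Final spoof / bona fide decision from both granularities.
        self.classifier = nn.Linear(2 * d_shared, n_classes)

    def forward(self, emo_frames, ac_frames):
        # emo_frames: (B, T, d_emo), ac_frames: (B, T, d_ac)
        e = self.emo_proj(emo_frames)
        a = self.ac_proj(ac_frames)
        # Frame-level inconsistency: gap between the acoustic frames and
        # their emotion-conditioned reconstruction.
        fused, _ = self.frame_attn(query=a, key=e, value=e)
        frame_incons = (a - fused).mean(dim=1)       # (B, d_shared)
        # Utterance-level inconsistency: mismatch of pooled summaries.
        utt_incons = a.mean(dim=1) - e.mean(dim=1)   # (B, d_shared)
        return self.classifier(torch.cat([frame_incons, utt_incons], dim=-1))
```

In this reading, bona fide speech should yield small frame- and utterance-level residuals because emotion and acoustics co-vary naturally, while spoofed speech produces larger or more erratic residuals that the classifier can exploit.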