Towards Explicit Acoustic Evidence Perception in Audio LLMs for Speech Deepfake Detection

Authors: Xiaoxuan Guo, Yuankun Xie, Haonan Cheng, Jiayi Zhou, Jian Liu, Hengyan Huang, Long Ye, Qin Zhang

Published: 2026-01-30 15:16:43+00:00

Comment: 9 pages, 4 figures

AI Summary

This paper addresses the limitation of audio Large Language Models (LLMs) in speech deepfake detection, where their semantic understanding often biases them to overlook subtle acoustic artifacts. The authors introduce SDD-APALLM, an acoustically enhanced framework that explicitly exposes fine-grained time-frequency evidence (spectrograms) alongside raw audio to improve the model's perception of acoustic inconsistencies. This approach aims to provide audio LLMs with better access to crucial acoustic cues without compromising their semantic understanding, leading to more robust and accurate deepfake detection.

Abstract

Speech deepfake detection (SDD) focuses on identifying whether a given speech signal is genuine or has been synthetically generated. Existing audio large language model (LLM)-based methods excel in content understanding; however, their predictions are often biased toward semantically correlated cues, which results in fine-grained acoustic artifacts being overlooked during the decision-making process. Consequently, fake speech with natural semantics can bypass detectors despite harboring subtle acoustic anomalies; this suggests that the challenge stems not from the absence of acoustic information, but from its inadequate accessibility when semantic-dominant reasoning prevails. To address this issue, we investigate SDD within the audio LLM paradigm and introduce SDD with Auditory Perception-enhanced Audio Large Language Model (SDD-APALLM), an acoustically enhanced framework designed to explicitly expose fine-grained time-frequency evidence as accessible acoustic cues. By combining raw audio with structured spectrograms, the proposed framework empowers audio LLMs to more effectively capture subtle acoustic inconsistencies without compromising their semantic understanding. Experimental results indicate consistent gains in detection accuracy and robustness, especially in cases where semantic cues are misleading. Further analysis reveals that these improvements stem from a coordinated utilization of semantic and acoustic information, as opposed to simple modality aggregation.


Key findings

The SDD-APALLM framework achieved consistent gains in detection accuracy and robustness, particularly in scenarios where semantic cues could be misleading. Experimental results showed that explicitly exposing acoustic evidence improved performance, especially under domain shift, by steering the model toward acoustically grounded evidence rather than semantically correlated shortcuts. This suggests that a coordinated utilization of semantic and acoustic information, rather than simple aggregation, leads to more stable and reliable detection.
Approach

The proposed SDD-APALLM framework enhances audio LLMs for deepfake detection by explicitly providing structured time-frequency representations (Constant-Q Transform spectrograms) as visual tokens alongside the raw audio input. This intervention guides the audio LLM, built on Qwen2.5-Omni, to coordinate both semantic information (from raw audio) and fine-grained acoustic information (from spectrograms processed by a Vision Transformer), making acoustic artifacts more accessible for detection decisions and mitigating reliance on semantic-dominant shortcuts.
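To make the preprocessing concrete, the sketch below shows a naive Constant-Q-style spectrogram plus a ViT-style patchification in plain numpy. This is an illustrative sketch, not the authors' implementation: the function names, parameter values (fmin, bins per octave, patch size), and the log compression are assumptions, and a real pipeline would use an optimized CQT implementation (e.g., from an audio library) and the vision tower's own patch embedding.

```python
import numpy as np

def cqt_spectrogram(audio, sr=16000, fmin=32.7, bins_per_octave=12,
                    n_bins=84, hop=512):
    """Naive constant-Q time-frequency representation (illustrative only).

    Bin k is centered at fmin * 2**(k / bins_per_octave); its analysis
    window shrinks as frequency grows, keeping the quality factor Q
    constant across bins.
    """
    Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)
    n_frames = 1 + (len(audio) - 1) // hop
    spec = np.zeros((n_bins, n_frames))
    for k in range(n_bins):
        fk = fmin * 2 ** (k / bins_per_octave)
        win_len = min(int(round(Q * sr / fk)), len(audio))
        t = np.arange(win_len)
        # Hann-windowed complex exponential at the bin's center frequency.
        kernel = np.hanning(win_len) * np.exp(-2j * np.pi * fk * t / sr) / win_len
        for m in range(n_frames):
            seg = audio[m * hop : m * hop + win_len]
            spec[k, m] = np.abs(np.dot(seg, kernel[: len(seg)]))
    return np.log1p(spec)  # log compression tames the dynamic range

def patchify(spec, patch=14):
    """Cut the (zero-padded) spectrogram into non-overlapping patch x patch
    tiles, flattened into ViT-style token vectors (hypothetical helper)."""
    H, W = spec.shape
    spec = np.pad(spec, ((0, -H % patch), (0, -W % patch)))
    H, W = spec.shape
    tiles = spec.reshape(H // patch, patch, W // patch, patch)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
```

On a one-second 440 Hz sine at 16 kHz, the energy concentrates in the bin closest to 440 Hz (about bin 45 with these settings), and `patchify` turns the 84-bin spectrogram into flat token vectors that an image encoder could consume alongside the audio-encoder tokens.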
Datasets
ASVspoof2019 LA, ASVspoof2021 LA
Model(s)
Qwen2.5-Omni (3B and 7B parameters), Whisper audio encoder, Vision Transformer (ViT) for spectrogram processing
Author countries
China