Analyzing Reasoning Shifts in Audio Deepfake Detection under Adversarial Attacks: The Reasoning Tax versus Shield Bifurcation

Authors: Binh Nguyen, Thai Le

Published: 2026-01-07 05:46:45+00:00

AI Summary

This paper analyzes how the reasoning of Audio Language Models (ALMs) used for audio deepfake detection (ADD) shifts under adversarial attacks. The authors propose a forensic auditing framework that evaluates acoustic perception, cognitive coherence, and cognitive dissonance. They find that explicit reasoning either acts as a defensive "shield" for acoustically robust models or imposes a performance "tax" on others, while high dissonance serves as a valuable "silent alarm."

Abstract

Audio Language Models (ALMs) offer a promising shift towards explainable audio deepfake detection (ADD), moving beyond black-box classifiers by providing some level of transparency into their predictions via reasoning traces. This necessitates a new class of model robustness analysis: robustness of the predictive reasoning under adversarial attacks, which goes beyond the existing paradigm that mainly focuses on shifts in the final predictions (e.g., fake vs. real). To analyze such reasoning shifts, we introduce a forensic auditing framework that evaluates the robustness of ALMs' reasoning under adversarial attacks along three interconnected dimensions: acoustic perception, cognitive coherence, and cognitive dissonance. Our systematic analysis reveals that explicit reasoning does not universally enhance robustness. Instead, we observe a bifurcation: for models exhibiting robust acoustic perception, reasoning acts as a defensive "shield", protecting them from adversarial attacks. However, for others, it imposes a performance "tax", particularly under linguistic attacks, which reduce cognitive coherence and increase attack success rate. Crucially, even when classification fails, high cognitive dissonance can serve as a "silent alarm", flagging potential manipulation. Overall, this work provides a critical evaluation of the role of reasoning in forensic audio deepfake analysis and its vulnerabilities.


Key findings
Reasoning introduces a bifurcation: it acts as a defensive "shield" for acoustically robust models (such as Qwen2-Audio) but imposes a performance "tax" on weaker ones. High cognitive dissonance is a critical "silent alarm" that flags potential manipulation, especially under acoustic attacks. Linguistic attacks are particularly dangerous because they induce "systemic deception," in which the model maintains high coherence while confidently justifying its errors.
Approach
The authors audit the robustness of Audio Language Models (ALMs) that use Chain-of-Thought (CoT) reasoning under both linguistic (TAPAS) and acoustic adversarial attacks. They introduce a three-tier forensic framework that evaluates acoustic perception (grounding in the audio evidence), cognitive coherence (internal consistency of the reasoning), and cognitive dissonance (conflict between the reasoning and the final verdict).
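
The paper's scoring code is not reproduced here, so the following is a minimal, self-contained sketch of how such a three-tier audit could be operationalized. All names (audit_trace, AuditReport) and the keyword-polarity heuristics are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

# Hypothetical cue lexicons; the paper's actual perception and coherence
# metrics are not reproduced here.
FAKE_CUES = {"synthetic", "robotic", "artifact", "vocoder", "unnatural", "spoofed"}
REAL_CUES = {"natural", "breathing", "consistent", "human", "authentic"}

@dataclass
class AuditReport:
    perception: float  # share of sentences grounded in acoustic cue words
    coherence: float   # share of grounded sentences agreeing with the majority polarity
    dissonance: float  # 1.0 if the reasoning's implied verdict contradicts the final one

def polarity(sentence: str) -> int:
    """Return +1 if the sentence leans 'fake', -1 if 'real', 0 if neutral."""
    words = set(sentence.lower().split())
    return (len(words & FAKE_CUES) > 0) - (len(words & REAL_CUES) > 0)

def audit_trace(reasoning: str, verdict: str) -> AuditReport:
    sentences = [s for s in reasoning.split(".") if s.strip()]
    polarities = [polarity(s) for s in sentences]
    grounded = [p for p in polarities if p != 0]
    majority = 1 if sum(polarities) >= 0 else -1
    implied = "fake" if majority > 0 else "real"
    return AuditReport(
        perception=len(grounded) / max(len(sentences), 1),
        coherence=sum(p == majority for p in grounded) / max(len(grounded), 1),
        dissonance=float(implied != verdict.lower()),
    )

# Example: the trace cites synthesis artifacts, yet the verdict says "real".
# The mismatch yields dissonance = 1.0, the "silent alarm" the paper highlights.
print(audit_trace(
    "The prosody sounds robotic. There are vocoder artifacts in the high bands.",
    verdict="real",
))

In practice the coherence and dissonance scores would more likely come from an LLM judge or embedding similarity than from keyword matching; the point of the sketch is the three-score interface, in which a high dissonance score can be surfaced as an alarm even when the final label is wrong.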
Datasets
ASVspoof 2019 (Logical Access)
Model(s)
Qwen2-Audio-7B, Phi-4-multimodal, gemma-3n-E4B, granite-3.3-8b, AASIST-2, RawNet-2, CLAD
Author countries
USA