Towards Robust Speech Deepfake Detection via Human-Inspired Reasoning
Authors: Artem Dvirniak, Evgeny Kushnir, Dmitrii Tarasov, Artem Iudin, Oleg Kiriukhin, Mikhail Pautov, Dmitrii Korzh, Oleg Y. Rogov
Published: 2026-03-11 12:59:12+00:00
AI Summary
This paper introduces HIR-SDD, a novel speech deepfake detection (SDD) framework designed to enhance generalization and interpretability. It integrates Large Audio Language Models (LALMs) with chain-of-thought reasoning derived from a new human-annotated dataset. Experimental evaluations confirm the method's detection effectiveness and its ability to provide human-perceptible justifications for its predictions.
Abstract
Modern generative audio models can be exploited by adversaries in unlawful ways, for example to impersonate other people and gain access to private information. To mitigate this threat, speech deepfake detection (SDD) methods have begun to evolve. Unfortunately, current SDD methods generally suffer from a lack of generalization to new audio domains and generators. Moreover, they lack interpretability, especially the kind of human-like reasoning that would naturally explain the attribution of a given audio sample to the bona fide or spoof class and provide human-perceptible cues. In this paper, we propose HIR-SDD, a novel SDD framework that combines the strengths of Large Audio Language Models (LALMs) with chain-of-thought reasoning derived from a newly proposed human-annotated dataset. Experimental evaluation demonstrates both the effectiveness of the proposed method and its ability to provide reasonable justifications for its predictions.