Indic-CodecFake meets SATYAM: Towards Detecting Neural Audio Codec Synthesized Speech Deepfakes in Indic Languages

Authors: Girish, Mohd Mujtaba Akhtar, Orchid Chetia Phukan, Arun Balaji Buduru

Published: 2026-04-21 19:54:54+00:00

Comment: Accepted to ACL 2026

AI Summary

This paper addresses the challenge of CodecFake detection in Indic languages by introducing Indic-CodecFake (ICF), the first large-scale benchmark of real and NAC-synthesized speech. It proposes SATYAM, a novel hyperbolic Audio Large Language Model (ALM), which effectively integrates semantic and prosodic representations using Bhattacharya distance in hyperbolic space. Experiments demonstrate that SATYAM significantly outperforms state-of-the-art detectors and ALM-based baselines, exhibiting robust generalization across Indic languages and unseen codecs.

Abstract

The rapid advancement of Audio Large Language Models (ALMs), driven by Neural Audio Codecs (NACs), has led to the emergence of highly realistic speech deepfakes, commonly referred to as CodecFakes (CFs). Consequently, CF detection has attracted increasing attention from the research community. However, existing studies predominantly focus on English or Chinese, leaving the vulnerability of Indic languages largely unexplored. To bridge this gap, we introduce the Indic-CodecFake (ICF) dataset, the first large-scale benchmark comprising real and NAC-synthesized speech across multiple Indic languages, diverse speaker profiles, and multiple NAC types. We use IndicSUPERB as the real-speech corpus for generating the ICF dataset. Our experiments demonstrate that state-of-the-art (SOTA) CF detectors trained on English-centric datasets fail to generalize to ICF, underscoring the challenges posed by phonetic diversity and prosodic variability in Indic speech. Further, we present a systematic evaluation of SOTA ALMs in a zero-shot setting on the ICF dataset. We evaluate these ALMs because they have shown effectiveness across diverse speech tasks. However, our findings reveal that current ALMs exhibit consistently poor performance. To address this, we propose SATYAM, a novel hyperbolic ALM tailored for CF detection in Indic languages. SATYAM integrates semantic representations from Whisper and prosodic representations from TRILLsson using Bhattacharya distance in hyperbolic space, and subsequently performs the same alignment procedure between the fused speech representation and an input conditioning prompt. This dual-stage fusion framework enables SATYAM to effectively model hierarchical relationships both within speech (semantic-prosodic) and across modalities (speech-text). Extensive evaluations show that SATYAM consistently outperforms competitive end-to-end and ALM-based baselines on the ICF benchmark.


Key findings
State-of-the-art CodecFake detectors and ALMs trained on English-centric datasets fail to generalize to Indic languages, underscoring a significant performance gap. SATYAM consistently outperforms competitive end-to-end and ALM-based baselines on the ICF benchmark, achieving 98.32% accuracy and 3.27% EER. Furthermore, SATYAM demonstrates strong generalization across various Indic languages, language families, and unseen neural audio codecs or noisy conditions.
Approach
SATYAM is a supervised hyperbolic ALM that extracts semantic representations from Whisper and prosodic representations from TRILLsson. These representations are projected into a shared Euclidean space, mapped to hyperbolic space, and then fused using Bhattacharya distance in a dual-stage alignment process (speech-speech and speech-text). The final aggregated representation is fed as conditioning to a frozen Qwen2-7B LLM decoder, which outputs a 'Real' or 'Fake' decision.
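To make the fusion step concrete, the sketch below illustrates one plausible reading of the pipeline: embeddings are mapped to the Poincaré ball via the exponential map at the origin, converted to discrete distributions, compared with the Bhattacharyya distance, and mixed with a distance-derived gate. This is a minimal NumPy illustration under our own simplifying assumptions (toy 8-dimensional vectors, a softmax over dimensions, a scalar gate); the paper's actual projection layers, distance formulation, and alignment details are not specified here, and all function names are ours.

```python
import numpy as np

def expmap0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball (curvature -c)."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    sqrt_c = np.sqrt(c)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def softmax(x):
    """Turn a feature vector into a discrete distribution over its dims."""
    e = np.exp(x - x.max())
    return e / e.sum()

def bhattacharyya_distance(p, q, eps=1e-9):
    """D_B(p, q) = -log sum_i sqrt(p_i * q_i) for discrete distributions."""
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient, in (0, 1]
    return -np.log(bc + eps)

def fuse(sem, pro, c=1.0):
    """Stage-1 (speech-speech) fusion sketch: hyperbolic map + distance gate."""
    h_sem, h_pro = expmap0(sem, c), expmap0(pro, c)
    d = bhattacharyya_distance(softmax(h_sem), softmax(h_pro))
    w = np.exp(-d)  # closer streams (small d) -> heavier semantic weight
    return w * h_sem + (1.0 - w) * h_pro

rng = np.random.default_rng(0)
sem = rng.standard_normal(8)   # stand-in for a Whisper embedding
pro = rng.standard_normal(8)   # stand-in for a TRILLsson embedding
fused = fuse(sem, pro)
print(fused.shape)             # fused vector stays inside the unit ball
```

The same distance-based alignment would then be repeated between `fused` and a prompt embedding (the speech-text stage) before conditioning the frozen Qwen2-7B decoder; since both hyperbolic points have norm below 1, their convex combination remains a valid point on the Poincaré ball.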
Datasets
Indic-CodecFake (ICF), IndicSUPERB, CodecFake
Model(s)
Whisper, TRILLsson, Qwen2-7B (LLM decoder for SATYAM); AASIST, Wav2vec2, Qwen2-Audio, Pengi, Audio Flamingo (for baselines)
Author countries
India