Zero-Shot to Zero-Lies: Detecting Bengali Deepfake Audio through Transfer Learning

Authors: Most. Sharmin Sultana Samu, Md. Rakibul Islam, Md. Zahid Hossain, Md. Kamrozzaman Bhuiyan, Farhad Uz Zaman

Published: 2025-12-25 14:53:40+00:00

AI Summary

This paper presents the first systematic benchmark for detecting Bengali deepfake audio, using the BanglaFake dataset. The study evaluates both zero-shot inference with pretrained models and the fine-tuning of diverse deep learning architectures, and confirms that fine-tuning is necessary for robust deepfake detection in this low-resource language, yielding significant performance gains over zero-shot methods.

Abstract

The rapid growth of speech synthesis and voice conversion systems has made deepfake audio a major security concern. Bengali deepfake detection remains largely unexplored. In this work, we study automatic detection of Bengali audio deepfakes using the BanglaFake dataset. We evaluate zero-shot inference with several pretrained models: Wav2Vec2-XLSR-53, Whisper, PANNs-CNN14, WavLM and Audio Spectrogram Transformer. Zero-shot results show limited detection ability; the best model, Wav2Vec2-XLSR-53, achieves 53.80% accuracy, 56.60% AUC and 46.20% EER. We then fine-tune multiple architectures for Bengali deepfake detection: Wav2Vec2-Base, LCNN, LCNN-Attention, ResNet18, ViT-B16 and CNN-BiLSTM. Fine-tuned models show strong performance gains; ResNet18 achieves the highest accuracy of 79.17%, F1 score of 79.12%, AUC of 84.37% and EER of 24.35%. Experimental results confirm that fine-tuning significantly improves performance over zero-shot inference. This study provides the first systematic benchmark of Bengali deepfake audio detection and highlights the effectiveness of fine-tuned deep learning models for this low-resource language.
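The zero-shot setting above relies on pretrained speech encoders producing usable representations without any Bengali deepfake training. As a rough illustration (not the paper's pipeline), the sketch below extracts utterance-level Wav2Vec2-XLSR-53 embeddings with Hugging Face Transformers; the mean pooling and any downstream scoring rule built on the embedding are assumptions.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Zero-shot feature extraction: no fine-tuning, the encoder is used as-is.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model.eval()

waveform = torch.randn(16000 * 3)  # stand-in for a 3 s, 16 kHz Bengali clip (assumed format)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, frames, 1024) frame-level features

embedding = hidden.mean(dim=1)  # mean-pool to one vector per utterance (an assumption)
```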


Key findings
Zero-shot inference models showed very limited performance, with the best model (Wav2Vec2-XLSR-53) achieving only 53.80% accuracy. Fine-tuned models demonstrated strong gains, with ResNet18 achieving the highest performance at 79.17% accuracy, an F1 score of 79.12%, and an EER of 24.35%. This confirms the necessity of fine-tuning for robust detection in low-resource Bengali audio.
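The reported AUC and EER can be reproduced from per-utterance scores with standard tooling. The sketch below is a generic recipe, not the authors' evaluation code; the toy labels/scores and the convention that 1 = fake are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def compute_eer(labels, scores):
    """Equal Error Rate: the operating point on the ROC curve where the
    false positive rate equals the false negative rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # closest crossing of FPR and FNR
    return (fpr[idx] + fnr[idx]) / 2.0

# Toy example: 1 = fake, 0 = real; scores are model "fake" probabilities.
labels = np.array([0, 0, 1, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.8, 0.6, 0.9, 0.3])
print("AUC:", roc_auc_score(labels, scores))
print("EER:", compute_eer(labels, scores))
```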
Approach
The authors employ transfer learning by comparing zero-shot inference using large pretrained models (e.g., Wav2Vec2-XLSR-53) against fine-tuning six different architectures on the BanglaFake dataset. Fine-tuning involved adapting CNN-based, Residual Network (ResNet), Transformer (ViT), and hybrid CNN-BiLSTM models, typically operating on audio spectrograms or raw audio embeddings.
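As one concrete instance of the spectrogram-based fine-tuning described above, the sketch below adapts an ImageNet-pretrained ResNet18 to single-channel log-mel spectrograms for binary real/fake classification. The sample rate, mel settings, clip length, and use of ImageNet weights are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torchaudio
from torchvision.models import resnet18

# Waveform -> log-mel spectrogram front end (settings are assumptions).
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

# Adapt ResNet18: 1-channel input, 2-way real/fake head.
model = resnet18(weights="IMAGENET1K_V1")
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

waveform = torch.randn(1, 16000 * 3)      # stand-in for a 3 s clip
spec = to_db(mel(waveform)).unsqueeze(0)   # (batch, 1, n_mels, frames)
logits = model(spec)

# One fine-tuning step's gradients; label 1 = fake is an assumed convention.
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))
loss.backward()
```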
Datasets
BanglaFake
Model(s)
Wav2Vec2-Base, LCNN, LCNN-Attention, ResNet18, ViT-B16, CNN-BiLSTM
Author countries
Bangladesh