Spectral Masking and Interpolation Attack (SMIA): A Black-box Adversarial Attack against Voice Authentication and Anti-Spoofing Systems

Authors: Kamel Kamel, Hridoy Sankar Dutta, Keshav Sood, Sunil Aryal

Published: 2025-09-09 12:43:59+00:00

AI Summary

The paper proposes Spectral Masking and Interpolation Attack (SMIA), a black-box adversarial attack that manipulates inaudible frequency regions in AI-generated audio to bypass voice authentication systems (VAS) and anti-spoofing countermeasures (CMs). SMIA achieves high attack success rates against state-of-the-art models, demonstrating vulnerabilities in current voice authentication security.

Abstract

Voice Authentication Systems (VAS) use unique vocal characteristics for verification. They are increasingly integrated into high-security sectors such as banking and healthcare. Despite improvements driven by deep learning, these systems face severe vulnerabilities from sophisticated threats like deepfakes and adversarial attacks. The emergence of realistic voice cloning complicates detection, as systems struggle to distinguish authentic from synthetic audio. While anti-spoofing countermeasures (CMs) exist to mitigate these risks, many rely on static detection models that can be bypassed by novel adversarial methods, leaving a critical security gap. To demonstrate this vulnerability, we propose the Spectral Masking and Interpolation Attack (SMIA), a novel method that strategically manipulates inaudible frequency regions of AI-generated audio. By altering the voice in zones imperceptible to the human ear, SMIA creates adversarial samples that sound authentic while deceiving CMs. We conducted a comprehensive evaluation of our attack against state-of-the-art (SOTA) models across multiple tasks, under simulated real-world conditions. SMIA achieved a strong attack success rate (ASR) of at least 82% against combined VAS/CM systems, at least 97.5% against standalone speaker verification systems, and 100% against countermeasures. These findings conclusively demonstrate that current security postures are insufficient against adaptive adversarial attacks. This work highlights the urgent need for a paradigm shift toward next-generation defenses that employ dynamic, context-aware frameworks capable of evolving with the threat landscape.


Key findings
SMIA achieved at least 82% attack success rate against combined VAS/CM systems, at least 97.5% against standalone speaker verification systems, and 100% against countermeasures. These results highlight the inadequacy of current security measures against adaptive adversarial attacks and the need for more robust, dynamic defenses.
Approach
SMIA strategically manipulates inaudible frequency regions of AI-generated audio using masking and interpolation techniques. This creates adversarial samples that sound authentic to humans but deceive VAS and CMs by introducing subtle, perceptually insignificant distortions.
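The paper does not publish its implementation, but the masking-and-interpolation idea can be illustrated with a toy sketch: locate spectral bins quiet enough to be treated as perceptually masked, fill them with energy interpolated from the audible bins, and cap the injected energy at the masking floor so the change stays inaudible. The threshold, blend factor, and jitter below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def smia_sketch(audio, mask_db=-50.0, alpha=0.5, seed=0):
    """Toy sketch (NOT the authors' implementation) of the SMIA idea:
    perturb only spectral bins assumed to be perceptually masked."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(audio)
    mag, phase = np.abs(spec), np.angle(spec)
    ref = mag.max() + 1e-12
    db = 20.0 * np.log10(mag / ref + 1e-12)
    quiet = db < mask_db                      # bins assumed inaudible (masked)
    if not np.any(~quiet):                    # degenerate input: nothing audible
        return audio.copy()
    idx = np.arange(mag.size)
    # Interpolate magnitudes for masked bins from the audible ones...
    interp = np.interp(idx, idx[~quiet], mag[~quiet])
    # ...but cap the injected energy at the masking floor so it stays inaudible.
    floor = ref * 10.0 ** (mask_db / 20.0)
    jitter = 0.5 + 0.5 * rng.random(mag.size)  # randomise per-bin strength
    new_mag = np.where(quiet, alpha * np.minimum(interp, floor) * jitter, mag)
    # Rebuild the waveform with original phases and modified magnitudes.
    return np.fft.irfft(new_mag * np.exp(1j * phase), n=audio.size)
```

On a pure tone, this leaves the dominant spectral peak untouched and spreads low-level energy across the masked bins, so the perturbed waveform differs from the original while its audible content is preserved; a real attack would additionally optimise the perturbation against the target VAS/CM.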
Datasets
LibriSpeech, ASVspoof 2019
Model(s)
RawNet2, RawGAT-ST, RawPC-Darts, X-Vectors, DeepSpeaker, Microsoft Azure SV
Author countries
Australia, India