Can Current Detectors Catch Face-to-Voice Deepfake Attacks?

Authors: Nguyen Linh Bao Nguyen, Alsharif Abuadbba, Kristen Moore, Tingmin Wu

Published: 2025-10-23 21:24:55+00:00

Comment: 8 pages, Accepted at Workshop on AI for Cyber Threat Intelligence, co-located with ACSAC 2025

AI Summary

This paper investigates the effectiveness of state-of-the-art audio deepfake detectors against FOICE, a novel face-to-voice synthesis method that generates speech from a single facial image. The study reveals that current detectors consistently fail to identify FOICE-generated audio, highlighting a critical vulnerability. While fine-tuning on FOICE data significantly improves detection, it often leads to a detrimental trade-off, diminishing the detectors' robustness against unseen deepfake generators.

Abstract

The rapid advancement of generative models has enabled the creation of increasingly stealthy synthetic voices, commonly referred to as audio deepfakes. A recent technique, FOICE [USENIX'24], demonstrates a particularly alarming capability: generating a victim's voice from a single facial image, without requiring any voice sample. By exploiting correlations between facial and vocal features, FOICE produces synthetic voices realistic enough to bypass industry-standard authentication systems, including WeChat Voiceprint and Microsoft Azure. This raises serious security concerns, as facial images are far easier for adversaries to obtain than voice samples, dramatically lowering the barrier to large-scale attacks. In this work, we investigate two core research questions: (RQ1) can state-of-the-art audio deepfake detectors reliably detect FOICE-generated speech under clean and noisy conditions, and (RQ2) whether fine-tuning these detectors on FOICE data improves detection without overfitting, thereby preserving robustness to unseen voice generators such as SpeechT5. Our study makes three contributions. First, we present the first systematic evaluation of FOICE detection, showing that leading detectors consistently fail under both standard and noisy conditions. Second, we introduce targeted fine-tuning strategies that capture FOICE-specific artifacts, yielding significant accuracy improvements. Third, we assess generalization after fine-tuning, revealing trade-offs between specialization to FOICE and robustness to unseen synthesis pipelines. These findings expose fundamental weaknesses in today's defenses and motivate new architectures and training protocols for next-generation audio deepfake detection.


Key findings

State-of-the-art audio deepfake detectors consistently fail to detect FOICE-generated speech reliably under both clean and noisy conditions, indicating a significant blind spot in current defenses. Fine-tuning on FOICE data yields substantial improvements in in-distribution detection, but it often causes severe performance degradation against unseen synthesis pipelines such as SpeechT5. Only models explicitly designed for domain invariance (e.g., Ren et al.) show improved generalization after fine-tuning, highlighting a critical trade-off between specialization and robustness.
Approach

The authors conduct a systematic evaluation of four state-of-the-art audio deepfake detectors on FOICE-generated speech under various conditions (clean, noisy, denoised). They then introduce targeted fine-tuning strategies on FOICE data and assess the fine-tuned models' performance on both FOICE and an unseen deepfake generator (SpeechT5) to analyze adaptation and generalization capabilities.
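Detector evaluations of this kind are typically reported via the Equal Error Rate (EER), the point where the false-acceptance rate on spoofed audio equals the false-rejection rate on bonafide audio. As a minimal illustrative sketch (not the authors' code), the function below computes an EER from hypothetical detector scores, where higher scores mean "more likely bonafide":

```python
# Illustrative sketch only: EER computation over hypothetical detector scores.
# Higher score = detector believes the clip is bonafide (real) speech.

def compute_eer(bonafide_scores, spoof_scores):
    """Return the EER: the midpoint of FAR and FRR at the threshold
    where they are closest to equal.
    FAR = fraction of spoof clips accepted as bonafide;
    FRR = fraction of bonafide clips rejected as spoof."""
    thresholds = sorted(set(bonafide_scores) | set(spoof_scores))
    best_gap, eer = None, None
    for t in thresholds:
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        frr = sum(s < t for s in bonafide_scores) / len(bonafide_scores)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

# Hypothetical scores: a detector that separates classes cleanly...
good = compute_eer([0.9, 0.85, 0.8, 0.7], [0.4, 0.3, 0.2, 0.1])
# ...versus one whose score distributions overlap (as the paper reports
# for FOICE-generated audio), yielding a much higher EER.
bad = compute_eer([0.6, 0.55, 0.45, 0.4], [0.65, 0.5, 0.5, 0.35])
print(good, bad)
```

The gap between the two EERs mirrors the paper's core observation: detectors that work well on conventional deepfakes can exhibit near-chance error rates on FOICE-style speech until fine-tuned.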
Datasets

VoxCeleb2, AVSpeech, FOICE (generated), SpeechT5 DS
Model(s)

AASIST [21], Ren et al. [15], Sun et al. [18], Temporal–Channel Modeling (TCM) [19]
Author countries

Australia