Beyond Face Swapping: A Diffusion-Based Digital Human Benchmark for Multimodal Deepfake Detection

Authors: Jiaxin Liu, Jia Wang, Saihui Hou, Min Ren, Huijia Wu, Long Ma, Renwang Pei, Zhaofeng He

Published: 2025-05-22 10:46:37+00:00

AI Summary

This paper introduces DigiFakeAV, a new large-scale multimodal deepfake dataset generated with diffusion models, designed to benchmark detection methods against highly realistic digital human forgeries. To address the severe challenges posed by this new data, the authors propose DigiShield, a detection baseline built on spatiotemporal and cross-modal fusion. DigiShield achieves state-of-the-art performance on DigiFakeAV and generalizes well to other datasets.

Abstract

In recent years, the explosive advancement of deepfake technology has posed a critical and escalating threat to public security, most recently in the form of diffusion-based digital human generation. Unlike traditional face manipulation methods, such models can generate highly realistic and consistent videos from multimodal control signals. Their flexibility and covertness pose severe challenges to existing detection strategies. To bridge this gap, we introduce DigiFakeAV, a new large-scale multimodal digital human forgery dataset based on diffusion models. Leveraging five of the latest digital human generation methods and a voice cloning method, we systematically construct a dataset comprising 60,000 videos (8.4 million frames), covering multiple nationalities, skin tones, genders, and real-world scenarios, significantly enhancing data diversity and realism. User studies show that participants misrecognize DigiFakeAV videos at a rate as high as 68%. Moreover, the substantial performance degradation of existing detection models on our dataset further highlights its difficulty. To address this problem, we propose DigiShield, an effective detection baseline based on spatiotemporal and cross-modal fusion. By jointly modeling the 3D spatiotemporal features of videos and the semantic-acoustic features of audio, DigiShield achieves state-of-the-art (SOTA) performance on DigiFakeAV and shows strong generalization to other datasets.


Key findings
User studies reveal a 68% misrecognition rate on DigiFakeAV, and existing detection models suffer substantial performance degradation, with some AUC scores dropping by over 40%. The proposed DigiShield achieves state-of-the-art performance on DigiFakeAV with an AUC of 80.1% and generalizes well to other datasets, confirming the efficacy of spatiotemporal and cross-modal fusion.
Approach
DigiShield is a multimodal detection framework that captures dynamic inconsistencies between video and audio through a two-stream spatiotemporal pipeline and cross-modal attention mechanisms. It jointly models the 3D spatiotemporal features of videos and the semantic-acoustic features of audio, fuses them, and classifies the result, training with a combination of contrastive and cross-entropy losses.
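The cross-modal attention step described above can be sketched in plain Python as scaled dot-product attention in which each video-frame feature (query) attends over the audio features (keys/values). This is a minimal illustrative sketch, not the authors' implementation; the function names, toy feature dimensions, and example vectors are assumptions made for demonstration.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_modal_attention(video_feats, audio_feats, d):
    """Fuse modalities: each video-frame feature attends over all audio
    features via scaled dot-product attention, producing one
    audio-conditioned vector per video frame (toy sketch)."""
    fused = []
    for q in video_feats:
        # Scaled dot-product scores between this query and every audio key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in audio_feats]
        weights = softmax(scores)
        # Output is the attention-weighted sum of audio (value) vectors.
        out = [sum(w * v[j] for w, v in zip(weights, audio_feats))
               for j in range(d)]
        fused.append(out)
    return fused

# Toy example: 2 video-frame features attending over 3 audio features (d = 2).
video = [[1.0, 0.0], [0.0, 1.0]]
audio = [[1.0, 1.0], [0.0, 2.0], [2.0, 0.0]]
fused = cross_modal_attention(video, audio, d=2)
```

In a full model the fused per-frame vectors would be pooled and passed to a classifier head trained with the contrastive and cross-entropy losses mentioned above; here the sketch only shows the fusion mechanism itself.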
Datasets
DigiFakeAV, HDTF, CelebV-HQ, FF++, Celeb-DF, DFDC, DF-TIMIT, FakeAVCeleb
Model(s)
DigiShield (with 3D-ResNet-50 backbone), Meso4, MesoInception4, Xception-c23, Capsule, HeadPose, F3-Net, Cross Efficient ViT, SSVF, SFIConv
Author countries
China