Leave No Stone Unturned: Uncovering Holistic Audio-Visual Intrinsic Coherence for Deepfake Detection

Authors: Jielun Peng, Yabin Wang, Yaqi Li, Long Kong, Xiaopeng Hong

Published: 2026-03-25 05:44:25+00:00

AI Summary

This paper introduces HAVIC, a Holistic Audio-Visual Intrinsic Coherence-based deepfake detector that learns comprehensive coherence priors within and across modalities by pre-training on authentic videos, then fuses them via holistic adaptive aggregation for robust deepfake detection. The authors also present HiFi-AVDF, a new high-fidelity audio-visual deepfake dataset, and demonstrate HAVIC's superior performance and generalization in challenging cross-dataset scenarios.

Abstract

The rapid progress of generative AI has enabled hyper-realistic audio-visual deepfakes, intensifying threats to personal security and social trust. Most existing deepfake detectors rely either on uni-modal artifacts or on audio-visual discrepancies, failing to jointly leverage both sources of information. Moreover, detectors that rely on generator-specific artifacts tend to exhibit degraded generalization when confronted with unseen forgeries. We argue that robust and generalizable detection should be grounded in intrinsic audio-visual coherence within and across modalities. Accordingly, we propose HAVIC, a Holistic Audio-Visual Intrinsic Coherence-based deepfake detector. HAVIC first learns priors of modality-specific structural coherence and of inter-modal micro- and macro-coherence by pre-training on authentic videos. Based on the learned priors, HAVIC further performs holistic adaptive aggregation to dynamically fuse audio-visual features for deepfake detection. Additionally, we introduce HiFi-AVDF, a high-fidelity audio-visual deepfake dataset featuring both text-to-video and image-to-video forgeries from state-of-the-art commercial generators. Extensive experiments across several benchmarks demonstrate that HAVIC significantly outperforms existing state-of-the-art methods, achieving improvements of 9.39% AP and 9.37% AUC on the most challenging cross-dataset scenario. Our code and dataset are available at https://github.com/tuffy-studio/HAVIC.


Key findings
HAVIC significantly outperforms state-of-the-art methods across various datasets, achieving improvements of 9.39% AP and 9.37% AUC on the challenging cross-dataset scenario. It demonstrates strong generalization to high-fidelity audio-visual deepfakes, particularly on the newly introduced HiFi-AVDF dataset, by effectively modeling holistic audio-visual coherence.
Approach
The HAVIC framework operates in two stages: first, Holistic Coherence Priors Pre-training on authentic videos using masked autoencoding with three self-supervised objectives (modality-specific hierarchical reconstruction, fine-grained audio-visual contrastive, and cross-modal semantic reconstruction losses) to learn coherence priors. Second, in the Holistic Adaptive Aggregation Classification stage, an Adaptive Feature Aggregation module dynamically fuses hierarchical uni-modal and interaction-aware features for deepfake detection.
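The adaptive fusion idea in the second stage can be illustrated with a minimal sketch: compute a relevance score for each candidate feature (e.g. hierarchical uni-modal and interaction-aware embeddings), turn the scores into softmax weights, and take the weighted sum. Note this is a hedged illustration of softmax-gated aggregation in general, not the authors' actual Adaptive Feature Aggregation module; `gate_w` and the function names are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_aggregate(features, gate_w):
    """Fuse feature vectors with learned softmax gates (illustrative only).

    features: list of (d,) arrays, e.g. hierarchical audio, visual,
              and interaction-aware embeddings.
    gate_w:   (n_features, d) gating parameters (hypothetical).
    Returns a single (d,) fused representation.
    """
    feats = np.stack(features)              # (n, d)
    scores = (gate_w * feats).sum(axis=1)   # one relevance score per feature
    weights = softmax(scores)               # adaptive, input-dependent weights
    return weights @ feats                  # convex combination of features
```

With zero gating parameters the scores tie and the fusion reduces to a plain average; trained gates would instead emphasize whichever features carry the strongest coherence cues for a given video.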
Datasets
LRS2, FakeAVCeleb, KoDF, HiFi-AVDF (newly introduced by authors)
Model(s)
HAVIC (Transformer encoders for the audio and visual modalities, an Audio-Visual Interaction Module, an Adaptive Feature Aggregation module, modality-specific decoders, and cross-modal semantic decoders), with encoders initialized from AudioMAE and MARLIN pre-trained weights.
Author countries
China