Investigating self-supervised representations for audio-visual deepfake detection

Authors: Dragos-Alexandru Boldisor, Stefan Smeu, Dan Oneata, Elisabeta Oneata

Published: 2025-11-21 12:04:00+00:00

AI Summary

This paper systematically evaluates the potential of diverse self-supervised representations (SSRs) for audio-visual deepfake detection, assessing their effectiveness, interpretability, and cross-modal complementarity. The authors find that most SSRs capture complementary, deepfake-relevant information and that models attend to semantically meaningful regions rather than spurious artifacts. However, a significant generalization gap across different datasets remains, highlighting challenges in achieving robust cross-domain performance.

Abstract

Self-supervised representations excel at many vision and speech tasks, but their potential for audio-visual deepfake detection remains underexplored. Unlike prior work that uses these features in isolation or buried within complex architectures, we systematically evaluate them across modalities (audio, video, multimodal) and domains (lip movements, generic visual content). We assess three key dimensions: detection effectiveness, interpretability of encoded information, and cross-modal complementarity. We find that most self-supervised features capture deepfake-relevant information, and that this information is complementary. Moreover, models primarily attend to semantically meaningful regions rather than spurious artifacts. Yet none generalize reliably across datasets. This generalization failure likely stems from dataset characteristics, not from the features themselves latching onto superficial patterns. These results expose both the promise and fundamental challenges of self-supervised representations for deepfake detection: while they learn meaningful patterns, achieving robust cross-domain performance remains elusive.


Key findings
SSRs yield strong in-domain results, with audio-informed features showing the best transferability across datasets. Anomaly detection using complementary feature combinations (e.g., AV-HuBERT A+V synchronization) improves robustness compared to supervised methods. Despite capturing meaningful cues, a substantial generalization gap persists when testing across different deepfake datasets.
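The synchronization-based anomaly idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embeddings are synthetic placeholders standing in for time-aligned AV-HuBERT audio and video features, and the anomaly score is simply the mean cosine distance between the two streams (real talking faces should stay well synchronized, so higher distance is treated as more anomalous).

```python
import numpy as np

def sync_anomaly_score(audio_emb, video_emb):
    """Mean cosine distance between time-aligned audio and video
    embeddings. audio_emb, video_emb: (T, D) arrays of per-frame
    features. Higher score = weaker A/V synchronization."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    cos = np.sum(a * v, axis=1)          # per-frame cosine similarity
    return float(np.mean(1.0 - cos))     # average cosine distance

# Toy demo with hypothetical embeddings
rng = np.random.default_rng(1)
video = rng.normal(size=(50, 16))
synced_audio = video + 0.1 * rng.normal(size=(50, 16))  # "real": audio tracks video
shuffled_audio = video[rng.permutation(50)]             # "fake": temporally misaligned

score_real = sync_anomaly_score(synced_audio, video)    # low
score_fake = sync_anomaly_score(shuffled_audio, video)  # high
```

In practice a detection threshold on this score would be calibrated on real data only, matching the unsupervised setup described in the paper.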
Approach
The study uses linear probing on frozen SSR backbones to measure the deepfake-relevant information each representation encodes, along with temporal and spatial explanation techniques for interpretability. Robustness is further assessed using unsupervised proxy tasks, namely next-token prediction and audio-video synchronization, trained exclusively on real data.
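Linear probing in this sense means training only a linear classifier on top of frozen embeddings, so probe accuracy reflects information already present in the representation. Below is a minimal sketch with synthetic features standing in for frozen SSR embeddings (the probe here is a plain logistic regression trained by gradient descent; the actual probe setup in the paper may differ).

```python
import numpy as np

def linear_probe(features, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression probe on frozen features.
    features: (N, D) frozen embeddings; labels: (N,) 0 = real, 1 = fake.
    The backbone is never updated -- only (w, b) are learned."""
    n, d = features.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid
        w -= lr * features.T @ (probs - labels) / n         # gradient step
        b -= lr * np.mean(probs - labels)
    return w, b

def probe_accuracy(features, labels, w, b):
    preds = (features @ w + b > 0).astype(int)
    return float(np.mean(preds == labels))

# Toy demo: two well-separated clusters as stand-in "frozen embeddings"
rng = np.random.default_rng(0)
real = rng.normal(-1.0, 0.5, size=(100, 8))
fake = rng.normal(+1.0, 0.5, size=(100, 8))
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(100), np.ones(100)])
w, b = linear_probe(X, y)
acc = probe_accuracy(X, y, w, b)
```

High probe accuracy on such frozen features is the evidence the paper uses to argue that the representations themselves encode deepfake-relevant information.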
Datasets
FakeAVCeleb (FAVC), AV-Deepfake1M (AV1M), DeepfakeEval 2024 (DFE-2024), AVLips
Model(s)
Wav2Vec XLS-R 2B, AV-HuBERT, Auto-AVSR, CLIP ViT-L/14, FSFM, Video-MAE-large
Author countries
Romania