GenConViT: Deepfake Video Detection Using Generative Convolutional Vision Transformer

Authors: Deressa Wodajo Deressa, Hannes Mareen, Peter Lambert, Solomon Atnafu, Zahid Akhtar, Glenn Van Wallendael

Published: 2023-07-13 19:27:40+00:00

AI Summary

This paper introduces GenConViT, a deepfake video detection model that combines ConvNeXt and Swin Transformer for feature extraction with an Autoencoder and a Variational Autoencoder for learning the latent data distribution. By leveraging both visual artifacts and latent features, GenConViT achieves improved performance across diverse deepfake datasets.

Abstract

Deepfakes have raised significant concerns due to their potential to spread false information and compromise digital media integrity. Current deepfake detection models often struggle to generalize across a diverse range of deepfake generation techniques and video content. In this work, we propose a Generative Convolutional Vision Transformer (GenConViT) for deepfake video detection. Our model combines ConvNeXt and Swin Transformer models for feature extraction, and it utilizes Autoencoder and Variational Autoencoder to learn from the latent data distribution. By learning from the visual artifacts and latent data distribution, GenConViT achieves improved performance in detecting a wide range of deepfake videos. The model is trained and evaluated on DFDC, FF++, TM, DeepfakeTIMIT, and Celeb-DF (v2) datasets. The proposed GenConViT model demonstrates strong performance in deepfake video detection, achieving high accuracy across the tested datasets. While our model shows promising results in deepfake video detection by leveraging visual and latent features, we demonstrate that further work is needed to improve its generalizability, i.e., when encountering out-of-distribution data. Our model provides an effective solution for identifying a wide range of fake videos while preserving media integrity. The open-source code for GenConViT is available at https://github.com/erprogs/GenConViT.


Key findings

GenConViT demonstrates strong performance on multiple deepfake datasets, achieving high accuracy, F1-scores, and AUC values. However, an ablation study reveals limitations in generalizing to out-of-distribution data, particularly hyper-realistic deepfakes, highlighting the need for further research on domain adaptability.

Approach

GenConViT uses a two-network architecture. One network pairs the ConvNeXt and Swin Transformer feature extractor with an Autoencoder, while the other pairs it with a Variational Autoencoder; both learn the latent data distribution alongside visual features. The final prediction is the average of the two networks' outputs, as sketched below.
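
The following is a minimal PyTorch sketch of this two-network idea, not the authors' implementation: the backbone modules stand in for the ConvNeXt and Swin Transformer extractors, the autoencoding is done at feature level rather than on full frames for brevity, and all module names (VAEHead, TwoNetworkSketch), dimensions, and heads are illustrative assumptions.

    import torch
    import torch.nn as nn


    class VAEHead(nn.Module):
        # Hypothetical VAE bottleneck: maps features to a Gaussian latent
        # and decodes a reconstruction. Dimensions are illustrative.
        def __init__(self, feat_dim=768, latent_dim=256):
            super().__init__()
            self.mu = nn.Linear(feat_dim, latent_dim)
            self.logvar = nn.Linear(feat_dim, latent_dim)
            self.decoder = nn.Linear(latent_dim, feat_dim)

        def forward(self, feats):
            mu, logvar = self.mu(feats), self.logvar(feats)
            # Reparameterization trick: sample z = mu + sigma * eps.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(z), mu, logvar


    class TwoNetworkSketch(nn.Module):
        # backbone_a / backbone_b stand in for the ConvNeXt and Swin
        # Transformer extractors; each maps a frame batch to (batch, feat_dim).
        def __init__(self, backbone_a, backbone_b, feat_dim=768, latent_dim=256):
            super().__init__()
            self.backbone_a, self.backbone_b = backbone_a, backbone_b
            # Network A: plain autoencoder bottleneck over the features.
            self.ae = nn.Sequential(
                nn.Linear(feat_dim, latent_dim),
                nn.GELU(),
                nn.Linear(latent_dim, feat_dim),
            )
            # Network B: variational autoencoder bottleneck.
            self.vae = VAEHead(feat_dim, latent_dim)
            self.head_a = nn.Linear(feat_dim, 2)  # real/fake logits, network A
            self.head_b = nn.Linear(feat_dim, 2)  # real/fake logits, network B

        def forward(self, x):
            feats_a = self.backbone_a(x)             # visual features, network A
            feats_b = self.backbone_b(x)             # visual features, network B
            recon_a = self.ae(feats_a)               # AE reconstruction
            recon_b, mu, logvar = self.vae(feats_b)  # VAE reconstruction
            # Final prediction: average the two networks' class logits.
            logits = 0.5 * (self.head_a(recon_a) + self.head_b(recon_b))
            return logits, (recon_a, recon_b, mu, logvar)


    if __name__ == "__main__":
        # Toy backbone (flattens a 224x224 RGB frame); real backbones would
        # be pretrained ConvNeXt / Swin Transformer encoders.
        feat_dim = 3 * 224 * 224
        flatten = nn.Flatten()
        model = TwoNetworkSketch(flatten, flatten, feat_dim=feat_dim)
        logits, _ = model(torch.randn(2, 3, 224, 224))
        print(logits.shape)  # torch.Size([2, 2])

At inference, the real/fake label follows from the averaged logits; the reconstruction and KL-divergence terms would only enter the training loss.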

Datasets

DFDC, FF++, TM, DeepfakeTIMIT, and Celeb-DF (v2)

Model(s)

ConvNeXt, Swin Transformer, Autoencoder, Variational Autoencoder

Author countries

Belgium, Ethiopia, USA