Unleashing Vision-Language Semantics for Deepfake Video Detection

Authors: Jiawen Zhu, Yunqi Miao, Xueyi Zhang, Jiankang Deng, Guansong Pang

Published: 2026-03-25 16:05:35+00:00

Comment: 14 pages, 7 figures, accepted by CVPR 2026

AI Summary

The paper introduces VLAForge, a novel framework for deepfake video detection that harnesses the rich vision-language semantics embedded in pre-trained Vision-Language Models (VLMs). VLAForge enhances visual perception through a ForgePerceiver, which captures diverse, subtle forgery cues, and an Identity-Aware VLA Scoring module, which provides complementary discriminative cues via identity prior-informed text prompting. This approach significantly outperforms state-of-the-art methods across various deepfake benchmarks, demonstrating superior generalization capabilities.

Abstract

Recent Deepfake Video Detection (DFD) studies have demonstrated that pre-trained Vision-Language Models (VLMs) such as CLIP exhibit strong generalization capabilities in detecting artifacts across different identities. However, existing approaches focus on leveraging visual features only, overlooking their most distinctive strength -- the rich vision-language semantics embedded in the latent space. We propose VLAForge, a novel DFD framework that unleashes the potential of such cross-modal semantics to enhance the model's discriminability in deepfake detection. This work i) enhances the visual perception of the VLM through a ForgePerceiver, which acts as an independent learner to capture diverse, subtle forgery cues both granularly and holistically, while preserving the pretrained Vision-Language Alignment (VLA) knowledge, and ii) provides a complementary discriminative cue -- the Identity-Aware VLA score, derived by coupling cross-modal semantics with the forgery cues learned by the ForgePerceiver. Notably, the VLA score is augmented by identity prior-informed text prompting to capture authenticity cues tailored to each identity, thereby enabling more discriminative cross-modal semantics. Comprehensive experiments on video DFD benchmarks, including classical face-swapping forgeries and recent full-face generation forgeries, demonstrate that our VLAForge substantially outperforms state-of-the-art methods at both frame and video levels. Code is available at https://github.com/mala-lab/VLAForge.


Key findings
VLAForge consistently and substantially outperforms 16 state-of-the-art methods across nine diverse deepfake video benchmarks, including both classical face-swapping and challenging full-face generation forgeries. It achieves significant AUROC improvements at both frame and video levels, demonstrating superior generalization and discriminative power under cross-dataset settings. The ablation studies confirm that each component, especially the complementary identity-aware and artifact-diverse priors, contributes to the enhanced robustness and performance.
Approach
VLAForge addresses deepfake detection by enhancing Vision-Language Models (VLMs) with two core components. The ForgePerceiver acts as an independent learner that captures diverse and subtle forgery cues both granularly and holistically. Concurrently, the Identity-Aware VLA Scoring module generates discriminative patch-level authenticity cues by aligning identity prior-informed text prompts with visual forgery cues, fusing the result with the forgery localization map produced by the ForgePerceiver.
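The paper does not include implementation details here, but the core idea of a patch-level VLA score can be sketched in CLIP style: compare each visual patch embedding against paired "real"/"fake" text-prompt embeddings and take a softmax over that pair. This is a minimal illustrative sketch, not the authors' implementation; the function name `vla_score`, the two-prompt setup, and all shapes are assumptions.

```python
import numpy as np

def vla_score(patch_feats, real_text_emb, fake_text_emb):
    """Hypothetical patch-level vision-language-alignment (VLA) score.

    patch_feats: (P, D) visual patch embeddings from a CLIP-like encoder.
    real_text_emb / fake_text_emb: (D,) embeddings of identity-aware
    "real" / "fake" text prompts.
    Returns a per-patch authenticity score in (0, 1); higher = more real.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    p = l2norm(patch_feats)
    # Cosine similarity of each patch to the two text prompts.
    sims = np.stack([p @ l2norm(real_text_emb),
                     p @ l2norm(fake_text_emb)], axis=-1)
    # Softmax over the {real, fake} prompt pair, CLIP-style.
    e = np.exp(sims - sims.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs[:, 0]  # probability mass on the "real" prompt

# Toy usage: patches near the "real" prompt should score above 0.5.
rng = np.random.default_rng(0)
D = 8
real_t = rng.normal(size=D)
fake_t = rng.normal(size=D)
patches = np.stack([real_t + 0.1 * rng.normal(size=D),
                    fake_t + 0.1 * rng.normal(size=D)])
scores = vla_score(patches, real_t, fake_t)
```

In the paper's full pipeline these per-patch scores would additionally be combined with the ForgePerceiver's forgery localization map; that fusion step is omitted here.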
Datasets
FaceForensics++ (FF++), CelebDF v1 (CDF-v1), CelebDF v2 (CDF-v2), Deepfake Detection Challenge (DFDC), DeepfakeDetection (DFD), VQGAN, StyleGAN-XL (StyleGAN), SiT-XL/2 (SiT), DiT, PixArt (the last five sourced from the DF40 dataset).
Model(s)
CLIP (VLM backbone, specifically OpenCLIP with ViT-L/14), ForgePerceiver (lightweight ViT architecture).
Author countries
Singapore, UK