Beyond Static Artifacts: A Forensic Benchmark for Video Deepfake Reasoning in Vision Language Models

Authors: Zheyuan Gu, Qingsong Zhao, Yusong Wang, Zhaohong Huang, Xinqi Li, Cheng Yuan, Jiaowei Shao, Chi Zhang, Xuelong Li

Published: 2026-02-25 10:54:55+00:00

Comment: 16 pages, 9 figures. Submitted to CVPR 2026

AI Summary

This paper introduces Forensic Answer-Questioning (FAQ), a large-scale benchmark designed to evaluate and train Vision-Language Models (VLMs) to reason about temporal inconsistencies in video deepfakes. FAQ frames deepfake analysis as a multiple-choice task organized in a three-level hierarchy covering facial perception, temporal grounding, and forensic reasoning. Models fine-tuned on the derived FAQ-IT instruction-tuning set show improved deepfake detection across multiple benchmarks.

Abstract

Current Vision-Language Models (VLMs) for deepfake detection excel at identifying spatial artifacts but overlook a critical dimension: temporal inconsistencies in video forgeries. Adapting VLMs to reason about these dynamic cues remains a distinct challenge. To bridge this gap, we propose Forensic Answer-Questioning (FAQ), a large-scale benchmark that formulates temporal deepfake analysis as a multiple-choice task. FAQ introduces a three-level hierarchy to progressively evaluate and equip VLMs with forensic capabilities: (1) Facial Perception, testing the ability to identify static visual artifacts; (2) Temporal Deepfake Grounding, requiring the localization of dynamic forgery artifacts across frames; and (3) Forensic Reasoning, challenging models to synthesize evidence for final authenticity verdicts. We evaluate a range of VLMs on FAQ and generate a corresponding instruction-tuning set, FAQ-IT. Extensive experiments show that models fine-tuned on FAQ-IT achieve advanced performance on both in-domain and cross-dataset detection benchmarks. Ablation studies further validate the impact of our key design choices, confirming that FAQ is the driving force behind the temporal reasoning capabilities of these VLMs.
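To make the three-level formulation concrete, below is a minimal sketch of what a single FAQ-style multiple-choice item might look like. The field names, level labels, and example contents are illustrative assumptions, not the paper's released schema.

```python
from dataclasses import dataclass

# Hypothetical level labels mirroring FAQ's three-tier hierarchy (names assumed).
LEVELS = ("facial_perception", "temporal_grounding", "forensic_reasoning")

@dataclass
class FAQItem:
    """One multiple-choice question about a (possibly forged) face video."""
    video_path: str      # clip under analysis
    level: str           # one of LEVELS
    question: str        # natural-language forensic question
    options: list[str]   # candidate answers
    answer_index: int    # index of the correct option

# Example items, one per level (contents invented for illustration).
items = [
    FAQItem("clips/0001.mp4", "facial_perception",
            "Which static artifact is visible on the subject's face?",
            ["Blending boundary near the jawline", "Motion blur from camera shake",
             "Lens flare", "No visible artifact"], 0),
    FAQItem("clips/0001.mp4", "temporal_grounding",
            "In which frame range does the mouth motion desynchronize?",
            ["Frames 0-30", "Frames 45-90", "Frames 120-150", "No desynchronization"], 1),
    FAQItem("clips/0001.mp4", "forensic_reasoning",
            "Given the evidence above, is this video authentic or forged?",
            ["Authentic", "Forged"], 1),
]
```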


Key findings
Zero-shot evaluation revealed a significant capability gap in VLMs for temporal deepfake detection, with models struggling particularly at the higher reasoning levels. Fine-tuning on the FAQ-IT dataset yielded substantial gains (up to a 48.8% increase in average accuracy for LLaVA-NeXT), improving both in-domain and cross-dataset deepfake detection, especially temporal reasoning. The hierarchical QA structure and temporal focus of FAQ were crucial for equipping VLMs with these forensic capabilities.
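A minimal sketch of how such a per-level capability gap could be measured, assuming model predictions are collected as (level, predicted_index, answer_index) records; this is plain MCQ accuracy per level, not necessarily the paper's exact evaluation protocol.

```python
from collections import defaultdict

def per_level_accuracy(records):
    """records: iterable of (level, predicted_index, answer_index) tuples.
    Returns {level: accuracy}, so accuracy drops at higher levels are visible."""
    correct, total = defaultdict(int), defaultdict(int)
    for level, pred, gold in records:
        total[level] += 1
        correct[level] += int(pred == gold)
    return {lvl: correct[lvl] / total[lvl] for lvl in total}

# Toy usage: a model that handles perception but fails at reasoning.
records = [
    ("facial_perception", 0, 0), ("facial_perception", 2, 2),
    ("temporal_grounding", 1, 1), ("temporal_grounding", 0, 3),
    ("forensic_reasoning", 0, 1), ("forensic_reasoning", 0, 1),
]
print(per_level_accuracy(records))
# {'facial_perception': 1.0, 'temporal_grounding': 0.5, 'forensic_reasoning': 0.0}
```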
Approach
The authors propose FAQ, a multiple-choice question (MCQ) benchmark for video deepfake analysis, structured into three hierarchical levels: Facial Perception, Temporal Deepfake Grounding, and Forensic Reasoning. From this benchmark they derive an instruction-tuning set, FAQ-IT, used to fine-tune Vision-Language Models (VLMs), guiding them to perceive, localize, and reason about dynamic forgery artifacts. The QA pairs are generated through a semi-automated pipeline that leverages human annotations of spatiotemporal forgery trajectories.
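The paper describes this pipeline only at a high level. The sketch below shows one plausible shape for the temporal-grounding step, where a human-annotated forged frame interval is turned into an MCQ with shifted-interval distractors; the function name, distractor strategy, and question wording are assumptions for illustration.

```python
import random

def trajectory_to_grounding_mcq(video_path, forged_span, num_frames, rng=random):
    """Turn an annotated forged frame interval into a temporal-grounding MCQ.

    forged_span: (start, end) frames marked as manipulated by annotators.
    Distractors are shifted copies of the true span (one illustrative strategy).
    """
    start, end = forged_span
    width = end - start
    distractors = set()
    while len(distractors) < 3:
        offset = rng.randint(-num_frames, num_frames)
        s = min(max(0, start + offset), num_frames - width)  # clamp to valid range
        if (s, s + width) != (start, end):
            distractors.add((s, s + width))
    options = [forged_span] + sorted(distractors)
    rng.shuffle(options)
    return {
        "video": video_path,
        "question": "In which frame range is the face manipulated?",
        "options": [f"Frames {s}-{e}" for s, e in options],
        "answer_index": options.index(forged_span),
    }

# Toy usage with an invented annotation.
print(trajectory_to_grounding_mcq("clips/0001.mp4", (45, 90), 150))
```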
Datasets
Forensic Answer-Questioning (FAQ), FAQ-IT (instruction-tuning set), FaceForensics++ (FF++), Celeb-DF (CDF), DeeperForensics (DFo), WildDeepfake (WDF)
Model(s)
InternVL-Chat-V1.2, LLaVA-1.5, DeepSeek-VL, LLaVA-InternLM2, ShareGPT4V, InternVL2, InternVideo2.5, ShareGPT4Video, Qwen3-VL, LLaVA-NeXT, Qwen2.5-VL, GPT-4o, Gemini-2.5-Flash (for evaluation); Qwen2.5-VL-7B, LLaVA-NeXT-7B (for fine-tuning)
Author countries
China