REVEAL -- Reasoning and Evaluation of Visual Evidence through Aligned Language

Authors: Ipsita Praharaj, Yukta Butala, Yash Butala

Published: 2025-08-18 00:42:02+00:00

AI Summary

REVEAL is a prompt-driven framework for image forgery detection using vision-language models. It employs two approaches: holistic scene-level evaluation and region-wise anomaly detection, achieving dataset-agnostic performance without fine-tuning.

Abstract

The rapid advancement of generative models has intensified the challenge of detecting and interpreting visual forgeries, necessitating robust frameworks for image forgery detection that also provide reasoning and localization. While existing works approach this problem using supervised training for specific manipulations or anomaly detection in the embedding space, generalization across domains remains a challenge. We frame forgery detection as a prompt-driven visual reasoning task, leveraging the semantic alignment capabilities of large vision-language models. We propose a framework, `REVEAL` (Reasoning and Evaluation of Visual Evidence through Aligned Language), that incorporates generalized guidelines. We propose two complementary approaches: (1) holistic scene-level evaluation, which relies on the physics, semantics, perspective, and realism of the image as a whole, and (2) region-wise anomaly detection, which splits the image into multiple regions and analyzes each of them. We conduct experiments over datasets from different domains (Photoshop, DeepFake, and AIGC editing). We compare the vision-language models against competitive baselines and analyze the reasoning they provide.


Key findings
Structured prompts significantly improved performance over baseline prompts across various datasets. Region-wise prompts excelled at detecting localized edits, while holistic prompts captured global cues. The study demonstrated the effectiveness of prompt engineering for improving image forgery detection and interpretability.
Approach
REVEAL frames image forgery detection as a prompt-driven visual reasoning task. It uses two prompting strategies: (1) holistic scene-level evaluation, which assesses the physics, semantics, perspective, and realism of the image as a whole, and (2) region-wise anomaly detection, which splits the image into regions and inspects each for local inconsistencies. Both strategies leverage the semantic alignment capabilities of large vision-language models, requiring no fine-tuning.
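A minimal sketch of how the two prompting strategies could be set up. The prompt wording and the 3x3 grid size are assumptions for illustration, not the paper's exact prompts; the `region_boxes` helper is hypothetical and only computes the crops that a region-wise pass would send to a vision-language model.

```python
# Illustrative sketch of REVEAL-style prompting (prompt text and grid
# size are assumptions; the paper's exact prompts are not reproduced).

HOLISTIC_PROMPT = (
    "Examine this image as a whole. Assess its physics, semantics, "
    "perspective, and overall realism. Is the image authentic or "
    "manipulated? Explain your reasoning."
)

REGION_PROMPT = (
    "Examine region ({row}, {col}) of the image. Report any local "
    "anomalies: inconsistent lighting, edge artifacts, texture breaks, "
    "or semantic mismatches with neighboring regions."
)

def region_boxes(width, height, rows=3, cols=3):
    """Compute (left, top, right, bottom) crop boxes for a rows x cols
    grid, so each crop can be paired with a region-wise prompt."""
    boxes = {}
    for r in range(rows):
        for c in range(cols):
            boxes[(r, c)] = (
                c * width // cols, r * height // rows,
                (c + 1) * width // cols, (r + 1) * height // rows,
            )
    return boxes

# Example: a 300x300 image yields nine 100x100 regions, each with
# its own formatted region-wise prompt.
prompts = {
    key: REGION_PROMPT.format(row=key[0], col=key[1])
    for key in region_boxes(300, 300)
}
```

In this sketch, the holistic pass sends the full image with `HOLISTIC_PROMPT`, while the region-wise pass sends each crop with its formatted `REGION_PROMPT`; the per-region responses support localization of edits that global cues miss.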
Datasets
CASIA1+, Columbia, IMD2020, Coverage, FFHQ, FaceApp, Seq-DeepFake, and AIGC-Editing data from FakeShield
Model(s)
LLaVA, GPT-4o, GPT-4.1, Gemini
Author countries
USA