ManipShield: A Unified Framework for Image Manipulation Detection, Localization and Explanation

Authors: Zitong Xu, Huiyu Duan, Xiaoyu Wang, Zhaolin Cai, Kaiwei Zhang, Qiang Hu, Jing Liu, Xiongkuo Min, Guangtao Zhai

Published: 2025-11-18 08:50:17+00:00

AI Summary

The paper introduces ManipBench, a large-scale benchmark of over 450K AI-edited images that targets manipulation detection, localization, and interpretability through fine-grained annotations. Building on this dataset, the authors propose ManipShield, a unified Multimodal Large Language Model (MLLM) framework that uses contrastive LoRA fine-tuning for simultaneous image manipulation detection, localization, and explanation. The method achieves state-of-the-art performance and strong generalization to diverse and unseen manipulation models.

Abstract

With the rapid advancement of generative models, powerful image editing methods now enable diverse and highly realistic image manipulations that far surpass traditional deepfake techniques, posing new challenges for manipulation detection. Existing image manipulation detection and localization (IMDL) benchmarks suffer from limited content diversity, narrow generative-model coverage, and insufficient interpretability, which hinders the generalization and explanation capabilities of current manipulation detection methods. To address these limitations, we introduce ManipBench, a large-scale benchmark for image manipulation detection and localization focusing on AI-edited images. ManipBench contains over 450K manipulated images produced by 25 state-of-the-art image editing models across 12 manipulation categories, among which 100K images are further annotated with bounding boxes, judgment cues, and textual explanations to support interpretable detection. Building upon ManipBench, we propose ManipShield, an all-in-one model based on a Multimodal Large Language Model (MLLM) that leverages contrastive LoRA fine-tuning and task-specific decoders to achieve unified image manipulation detection, localization, and explanation. Extensive experiments on ManipBench and several public datasets demonstrate that ManipShield achieves state-of-the-art performance and exhibits strong generality to unseen manipulation models. Both ManipBench and ManipShield will be released upon publication.


Key findings
ManipShield achieves state-of-the-art performance on ManipBench across detection, localization, and explanation tasks, significantly outperforming existing forensic models and MLLMs. The framework exhibits strong zero-shot generalization, maintaining high accuracy on images produced by unseen closed-source generative models (GPT-Image, NanoBanana, FLUX-Kontext). Ablation studies confirm that both contrastive LoRA tuning and the Layer Discrimination Selection module are crucial for maximizing overall performance.
Approach
ManipShield is an MLLM-based framework (InternVL3.5 backbone). It uses contrastive LoRA fine-tuning on the vision encoder for robust discrimination, followed by Layer Discrimination Selection (LDS) to identify the most informative LLM layer. This layer's hidden state is processed by task-specific decoders for unified classification, localization (bounding boxes), and explanatory text generation.
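To make the approach concrete, the sketch below illustrates the three pieces described above in PyTorch: a LoRA adapter of the kind injected into the vision encoder and trained with a contrastive objective, a Layer Discrimination Selection step over LLM hidden layers, and lightweight decoders on the selected layer. All class names, dimensions, the margin-based contrastive loss, and the Fisher-style layer score are illustrative assumptions, not the paper's exact implementation; explanation text would be generated by the MLLM itself and is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank (LoRA) update,
    of the kind injected into the vision encoder's projection layers."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)               # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


def contrastive_separation_loss(real_feats, fake_feats, margin: float = 0.5):
    """Margin loss pushing an image's features away from its edited counterpart's
    features (an illustrative stand-in for the paper's contrastive LoRA objective)."""
    real = F.normalize(real_feats, dim=-1)
    fake = F.normalize(fake_feats, dim=-1)
    sim = (real * fake).sum(dim=-1)                      # cosine similarity per pair
    return F.relu(sim - (1.0 - margin)).mean()


def select_discriminative_layer(layer_hidden_states, labels):
    """Layer Discrimination Selection (sketch): rank each LLM layer by a
    Fisher-style real-vs-manipulated separability score on held-out features
    and return the index of the most discriminative layer."""
    best_layer, best_score = 0, float("-inf")
    for i, h in enumerate(layer_hidden_states):          # each h: (B, D), pooled over tokens
        real, fake = h[labels == 0], h[labels == 1]
        between = (real.mean(0) - fake.mean(0)).pow(2).sum()
        within = real.var(0).sum() + fake.var(0).sum() + 1e-6
        score = (between / within).item()
        if score > best_score:
            best_layer, best_score = i, score
    return best_layer


class TaskDecoders(nn.Module):
    """Lightweight heads over the selected layer's pooled hidden state:
    binary detection plus bounding-box localization."""

    def __init__(self, hidden_dim: int = 2048):
        super().__init__()
        self.cls_head = nn.Linear(hidden_dim, 2)          # real vs. manipulated
        self.bbox_head = nn.Sequential(                   # normalized (x, y, w, h)
            nn.Linear(hidden_dim, hidden_dim // 4), nn.GELU(),
            nn.Linear(hidden_dim // 4, 4), nn.Sigmoid(),
        )

    def forward(self, h):                                 # h: (B, hidden_dim)
        return self.cls_head(h), self.bbox_head(h)


if __name__ == "__main__":
    B, D = 8, 2048
    layers = [torch.randn(B, D) for _ in range(4)]        # stand-in pooled hidden states
    labels = torch.tensor([0, 1] * (B // 2))
    adapted = LoRALinear(nn.Linear(D, D))(torch.randn(B, D))   # adapter on encoder features
    loss = contrastive_separation_loss(adapted[labels == 0], adapted[labels == 1])
    k = select_discriminative_layer(layers, labels)
    logits, boxes = TaskDecoders(D)(layers[k])
    print(k, logits.shape, boxes.shape, loss.item())

In this toy run the contrastive loss operates on adapter outputs, layer selection picks one of four dummy hidden states, and the decoders produce detection logits and a normalized bounding box; in the actual framework these stages would be wired to the InternVL3.5 vision encoder and LLM layers.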
Datasets
ManipBench, CASIA2, IMD2020, DDFT, DF40
Model(s)
InternVL3.5 (MLLM Backbone), LoRA, ResNet50, Swin-T, PSCCNet, HifiNet, ChatGPT-4o, Gemini2.5-Pro, DeepSeekVL2, Qwen3-VL
Author countries
China