Beyond Semantics: Uncovering the Physics of Fakes via Universal Physical Descriptors for Cross-Modal Synthetic Detection

Authors: Mei Qiu, Jianqiang Zhao, Yanyun Qu

Published: 2026-04-06 11:50:29+00:00

AI Summary

This paper introduces a novel method for detecting AI-generated images by identifying universal physical descriptors. It explores 15 physical features across over 20 datasets, proposing a feature selection algorithm to pinpoint five core physical features that robustly discriminate synthetic images. These features are then encoded into text and integrated with semantic captions to guide CLIP's image-text representation learning, leading to state-of-the-art deepfake detection performance.
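
As a rough illustration of the feature-selection step, the snippet below screens candidate descriptors by requiring a single-feature logistic regression (one of the models listed later in this summary) to stay discriminative on every dataset; the AUC threshold and the in-sample evaluation are illustrative assumptions, not the paper's exact criterion.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def rank_stable_features(datasets, feature_names, min_auc=0.75):
    """Keep features whose single-feature AUC clears min_auc on every dataset."""
    kept = []
    for j, name in enumerate(feature_names):
        aucs = []
        for X, y in datasets:  # X: (n_samples, n_features) array, y: 0 = real, 1 = fake
            clf = LogisticRegression(max_iter=1000).fit(X[:, [j]], y)
            scores = clf.predict_proba(X[:, [j]])[:, 1]
            aucs.append(roc_auc_score(y, scores))  # in-sample AUC, for brevity
        if min(aucs) >= min_auc:  # require consistent power across all datasets
            kept.append((name, min(aucs)))
    return sorted(kept, key=lambda t: -t[1])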

Abstract

The rapid advancement of AI-generated content (AIGC) has blurred the boundaries between real and synthetic images, exposing the limitations of existing deepfake detectors that often overfit to specific generative models. This adaptability crisis calls for a fundamental reexamination of the intrinsic physical characteristics that distinguish natural from AI-generated images. In this paper, we address two critical research questions: (1) What physical features can stably and robustly discriminate AI-generated images across diverse datasets and generative architectures? (2) Can these objective pixel-level features be integrated into multimodal models like CLIP to enhance detection performance while mitigating the unreliability of language-based information? To answer these questions, we conduct a comprehensive exploration of 15 physical features across more than 20 datasets generated by various GANs and diffusion models. We propose a novel feature selection algorithm that identifies five core physical features, including Laplacian variance, Sobel statistics, and residual noise variance, that exhibit consistent discriminative power across all tested datasets. These features are then converted into text-encoded values and integrated with semantic captions to guide image-text representation learning in CLIP. Extensive experiments demonstrate that our method achieves state-of-the-art performance on multiple GenImage benchmarks, with near-perfect accuracy (99.8%) on datasets such as Wukong and SDv1.4. By bridging pixel-level authenticity with semantic understanding, this work pioneers the use of physically grounded features for trustworthy vision-language modeling and opens new directions for mitigating hallucinations and textual inaccuracies in large multimodal models.
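
For concreteness, a minimal sketch of three of the named descriptors (Laplacian variance, Sobel gradient statistics, and residual noise variance) is given below, assuming OpenCV filters on a grayscale image; the kernel sizes and the blur used to form the noise residual are illustrative choices, not the paper's exact settings.

import cv2
import numpy as np

def physical_descriptors(img_gray: np.ndarray) -> dict:
    img = img_gray.astype(np.float32)
    lap = cv2.Laplacian(img, cv2.CV_32F)               # second-derivative (sharpness) response
    sx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)     # horizontal gradients
    sy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)     # vertical gradients
    grad = np.sqrt(sx ** 2 + sy ** 2)
    residual = img - cv2.GaussianBlur(img, (3, 3), 0)  # high-frequency "noise" residual
    return {
        "laplacian_var": float(lap.var()),
        "sobel_mean": float(grad.mean()),
        "sobel_std": float(grad.std()),
        "residual_noise_var": float(residual.var()),
    }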


Key findings
The proposed method achieved state-of-the-art performance on multiple GenImage benchmarks, demonstrating near-perfect accuracy (99.8%) on datasets such as Wukong and SDv1.4. Integrating physically grounded features into the captions significantly improved the model's ability to detect fake images and enhanced image-text alignment, leading to stronger cross-model generalization.
Approach
The authors identify core physical features (e.g., Laplacian variance, Sobel statistics) that stably distinguish AI-generated images from real ones. These features are converted into text-encoded values and merged with semantic captions to create 'enhanced captions'. A CLIP-based model (ViT-L/14 with LoRA) is then trained with these enhanced captions and class prompts to improve deepfake image detection and generalization, as sketched below.
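
The sketch below illustrates the enhanced-caption idea together with a CLIP ViT-L/14 + LoRA setup, assuming Hugging Face transformers and peft; the caption template, example feature values, and LoRA rank/target modules are assumptions for illustration, not the paper's reported configuration.

from transformers import CLIPModel, CLIPProcessor
from peft import LoraConfig, get_peft_model

def enhanced_caption(semantic_caption: str, feats: dict) -> str:
    # Append text-encoded physical descriptor values to the semantic caption.
    phys = ", ".join(f"{k}={v:.3f}" for k, v in feats.items())
    return f"{semantic_caption}. Physical cues: {phys}."

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Wrap the CLIP backbone with low-rank adapters; only the adapters are trained.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_cfg)

caption = enhanced_caption(
    "a photo of a dog on a beach",  # hypothetical semantic caption and feature values
    {"laplacian_var": 182.4, "sobel_mean": 21.7, "residual_noise_var": 3.1},
)
text_inputs = processor(text=[caption], return_tensors="pt", padding=True)
text_features = model.get_text_features(**text_inputs)  # target for image-text alignment
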
Datasets
GenImage (Midjourney, SDv1.4, SDv1.5, ADM, GLIDE, Wukong, VQDM, BigGAN), UniversalFakeDetect (BigGAN, StarGAN, GauGAN, Deepfake, CRN, IMLE, Guided Diffusion, LDM, Glide, Dalle)
Model(s)
CLIP ViT-L/14 with LoRA, Logistic Regression, ClipCap
Author countries
China