Deepfakes: we need to re-think the concept of real images
Authors: Janis Keuper, Margret Keuper
Published: 2025-09-26 04:40:13+00:00
Abstract
The wide availability and low usability barrier of modern image generation models have triggered reasonable fears of criminal misconduct and negative social implications. The machine learning community has been engaging with this problem through an extensive series of publications proposing algorithmic solutions for the detection of fake images, i.e., entirely generated or partially manipulated images. While there is undoubtedly some progress towards technical solutions to the problem, we argue that current and prior work focuses too much on generative algorithms and fake data samples, neglecting a clear definition and data collection of real images. The fundamental question "what is a real image?" might appear quite philosophical, but our analysis shows that the development and evaluation of essentially all current fake-detection methods rely on only a few rather old, low-resolution datasets of real images, such as ImageNet. However, the technology for acquiring real images, i.e., taking photos, has evolved drastically over the last decade: today, over 90% of all photographs are produced by smartphones, which typically use algorithms to compute an image from multiple inputs (over time) from multiple sensors. Since these image formation algorithms are typically neural network architectures closely related to fake-image generators, we take the position that today, we need to re-think the concept of real images. The purpose of this position paper is to raise awareness of the current shortcomings in this active field of research and to trigger an open discussion of whether the detection of fake images is a sound objective at all. At the very least, we need a clear technical definition of real images and new benchmark datasets.