Through the Lens: Benchmarking Deepfake Detectors Against Moiré-Induced Distortions

Authors: Razaib Tariq, Minji Heo, Simon S. Woo, Shahroz Tariq

Published: 2025-10-27 11:23:04+00:00

AI Summary

This study systematically benchmarks 15 state-of-the-art deepfake detectors against Moiré-induced distortions, which commonly occur when deepfake videos are captured from digital screens in real-world settings. The authors introduce the DeepMoiréFake (DMF) dataset, created by manually recapturing videos from five existing datasets under diverse conditions, alongside experiments using synthetic Moiré patterns. The research demonstrates a significant vulnerability in current detectors to these visual artifacts and highlights the complexity of developing robust deepfake detection systems for practical applications.

Abstract

Deepfake detection remains a pressing challenge, particularly in real-world settings where smartphone-captured media from digital screens often introduces Moiré artifacts that can distort detection outcomes. This study systematically evaluates state-of-the-art (SOTA) deepfake detectors on Moiré-affected videos, an issue that has received little attention. We collected a dataset of 12,832 videos, spanning 35.64 hours, from the Celeb-DF, DFD, DFDC, UADFV, and FF++ datasets, capturing footage under diverse real-world conditions, including varying screens, smartphones, lighting setups, and camera angles. To further examine the influence of Moiré patterns on deepfake detection, we conducted additional experiments using our DeepMoiréFake (DMF) dataset and two synthetic Moiré generation techniques. Across 15 top-performing detectors, our results show that Moiré artifacts degrade performance by as much as 25.4%, while synthetically generated Moiré patterns lead to a 21.4% drop in accuracy. Surprisingly, demoiréing methods, intended as a mitigation approach, instead worsened the problem, reducing accuracy by up to 17.2%. These findings underscore the urgent need for detection models that can robustly handle Moiré distortions alongside other real-world challenges, such as compression, sharpening, and blurring. By introducing the DMF dataset, we aim to drive future research toward closing the gap between controlled experiments and practical deepfake detection.


Key findings
Moiré artifacts significantly degrade detector performance, causing accuracy drops of up to 25.4% for authentic Moiré and 21.4% for synthetic Moiré patterns. Surprisingly, mitigation via demoiréing methods exacerbated the problem, reducing accuracy by up to 17.2%, because these techniques inadvertently removed, along with the Moiré noise, the subtle deepfake artifacts crucial for detection.
Approach
The researchers created the DeepMoiréFake (DMF) dataset by physically recording existing deepfake videos (from Celeb-DF, DFD, DFDC, UADFV, and FF++) displayed on various screens using different smartphones and lighting setups to induce authentic Moiré patterns. They benchmarked 15 SOTA deepfake detectors across Authentic Moiré (CMPA), Synthetic Moiré (SMPA), and Compression Attacks (CA), also evaluating several demoiréing methods as potential mitigation strategies.
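To make the idea of a Synthetic Moiré Pattern Attack (SMPA) concrete, the sketch below overlays a sinusoidal interference grating on an image. This is a minimal illustration under our own assumptions, not the paper's actual SMPA pipeline; the function name and parameters (`freq`, `angle_deg`, `strength`) are hypothetical choices for the example.

```python
import numpy as np

def add_synthetic_moire(frame, freq=0.35, angle_deg=30.0, strength=0.15):
    """Overlay a simple sinusoidal interference pattern on a uint8 image.

    Illustrative stand-in for screen-recapture Moire: two slightly
    detuned gratings, rotated by `angle_deg`, beat against each other
    to produce low-frequency bands, blended in at `strength`.
    (Not the paper's synthetic Moire generation technique.)
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    theta = np.deg2rad(angle_deg)
    # Rotated coordinate along which the gratings oscillate
    u = xs * np.cos(theta) + ys * np.sin(theta)
    # Sum of two near-equal frequencies -> visible Moire beat pattern
    pattern = 0.5 * (np.sin(2 * np.pi * freq * u)
                     + np.sin(2 * np.pi * (freq * 1.05) * u))
    if frame.ndim == 3:
        pattern = pattern[..., None]  # broadcast over color channels
    out = frame.astype(np.float32) + strength * 255.0 * pattern
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying such a perturbation frame-by-frame to a deepfake video approximates, very roughly, the screen-recapture conditions the benchmark studies; the authentic-Moiré (CMPA) setting instead records real screens with real smartphones.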
Datasets
Celeb-DF, DFD, DFDC, UADFV, FF++, DeepMoiréFake (DMF).
Model(s)
Rossler (C23, C40), SelfBlended, ForgeryNet, Capsule-Forensics (Capsule), MAT, CADDM, CCViT, ADD, AltFreezing, FTCN, LRNet (BF, RF), LipForensics (Detectors); DMCNN, MBCNN, ESDNet, DDA, VD-Moiré, FPANet (Demoiréing/Defense methods).
Author countries
South Korea, Australia