Human-AI Ensembles Improve Deepfake Detection in Low-to-Medium Quality Videos
Authors: Marco Postiglione, Isabel Gortner, V. S. Subrahmanian
Published: 2026-03-15 23:25:34+00:00
AI Summary
This paper compares human and AI deepfake detection across varying video qualities, finding that humans outperform state-of-the-art AI detectors, particularly on low-to-medium quality videos. It shows that human and AI errors are complementary, allowing hybrid human-AI ensembles to achieve superior accuracy and reduce high-confidence errors. The findings suggest that effective real-world deepfake detection, especially for non-professionally produced videos, requires human-AI collaboration.
Abstract
Deepfake detection is widely framed as a machine learning problem, yet how humans and AI detectors compare under realistic conditions remains poorly understood. We evaluate 200 human participants and 95 state-of-the-art AI detectors across two datasets: DF40, a standard benchmark, and CharadesDF, a novel dataset of videos of everyday activities. CharadesDF was recorded on mobile phones, yielding low-to-moderate quality videos compared to the more professionally captured DF40. Humans outperform AI detectors on both datasets, and the gap widens on CharadesDF, where AI accuracy collapses to near chance (0.537) while humans maintain robust performance (0.784). Human and AI errors are complementary: humans miss high-quality deepfakes while AI detectors flag authentic videos as fake, and hybrid human-AI ensembles reduce high-confidence errors. These findings suggest that effective real-world deepfake detection, especially for non-professionally produced videos, requires human-AI collaboration rather than AI algorithms alone.
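The abstract does not specify how the hybrid ensemble fuses human and AI judgments. A minimal sketch of one plausible fusion rule, a weighted soft vote over per-video "fake" probabilities, is shown below; the function names, the averaging rule, and the 0.5 weight are all assumptions for illustration, not the paper's actual method.

```python
# Illustrative human-AI ensemble via weighted soft voting.
# NOTE: this is a hypothetical sketch; the paper's fusion method is not
# described in the abstract, and all names/weights here are assumed.

def ensemble_fake_probability(human_score: float, ai_score: float,
                              human_weight: float = 0.5) -> float:
    """Weighted average of human and AI 'fake' probabilities, each in [0, 1]."""
    return human_weight * human_score + (1.0 - human_weight) * ai_score


def classify(prob: float, threshold: float = 0.5) -> str:
    """Binary decision from the fused probability."""
    return "fake" if prob >= threshold else "real"


# Complementary-error scenario: the AI confidently flags an authentic video
# as fake (0.8), while the human is confident it is real (0.1). The fused
# score falls below the threshold, correcting the AI's high-confidence error.
fused = ensemble_fake_probability(human_score=0.1, ai_score=0.8)
print(classify(fused))
```

With equal weights the fused score is 0.45, so the ensemble outputs "real", illustrating how complementary errors can cancel; in practice the weight could be tuned per video-quality regime, since the paper reports humans dominating on low-to-medium quality footage.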