Testing Human Ability To Detect Deepfake Images of Human Faces

Authors: Sergi D. Bray, Shane D. Johnson, Bennett Kleinberg

Published: 2022-12-07 14:48:25+00:00

AI Summary

This study investigates human ability to detect deepfake images generated by StyleGAN2 and the effectiveness of simple interventions (familiarization and advice) to improve detection accuracy. Results show that overall detection accuracy was only slightly above chance, and interventions did not significantly improve performance.

Abstract

Deepfakes are computationally created entities that falsely represent reality. They can take the form of images, video, or audio, and pose a threat to many areas of systems and societies, making them a topic of interest across cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to distinguish deepfake images of human faces (StyleGAN2:FFHQ) from non-deepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked whether each image was AI-generated, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images harder to label correctly, yet reported similarly high confidence regardless of the image. Thus, although overall participant accuracy was 62%, per-image accuracy ranged fairly evenly from 30% to 85%, falling below 50% for one in every five images. We interpret these findings as an urgent call to action to address this threat.


Key findings
Human accuracy in detecting StyleGAN2 deepfakes was only slightly above chance (around 62%). Simple interventions like familiarization and providing tell-tale signs did not significantly improve detection accuracy. Participant confidence was high but unrelated to accuracy.
Approach
The researchers conducted an online survey with 280 participants randomly assigned to four groups: a control group and three intervention groups (familiarization, one-time advice, and advice with reminders). Participants judged images as real or deepfake, reported confidence, and explained their reasoning. Accuracy and confidence were analyzed.
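The per-image breakdown reported in the abstract (accuracy ranging from 30% to 85%, with one in five images below chance) amounts to aggregating correctness over participant responses grouped by image. A minimal sketch of that aggregation is below; the response layout and function name are illustrative assumptions, not the authors' actual analysis pipeline:

```python
from collections import defaultdict

def per_image_accuracy(responses):
    """Aggregate participant responses into per-image and overall accuracy.

    `responses` is an assumed layout: a list of (image_id, correct) pairs,
    where `correct` is True if the participant labelled that image correctly.
    Returns (per-image accuracy dict, overall accuracy,
    count of images judged correctly less than half the time).
    """
    hits = defaultdict(int)    # correct labels per image
    totals = defaultdict(int)  # total labels per image
    for image_id, correct in responses:
        totals[image_id] += 1
        hits[image_id] += int(correct)
    per_image = {img: hits[img] / totals[img] for img in totals}
    overall = sum(hits.values()) / sum(totals.values())
    below_chance = sum(1 for acc in per_image.values() if acc < 0.5)
    return per_image, overall, below_chance
```

With each image shown to many participants, this yields a stable per-image accuracy, which is what exposes the pattern the study reports: overall accuracy near chance masks some images that are reliably detected and others that reliably fool participants.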
Datasets
Flickr-Faces-HQ (FFHQ) dataset for real images and StyleGAN2 trained on FFHQ for deepfake images.
Model(s)
StyleGAN2
Author countries
UK, Netherlands