Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts?

Authors: Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao 'Kenneth' Huang, Dongwon Lee

Published: 2023-04-03 14:06:47+00:00

AI Summary

This research investigates whether human collaboration improves the accuracy of identifying Large Language Model (LLM)-generated deepfake texts. Experiments with non-expert and expert groups showed that collaboration significantly increased detection accuracy, improving it by 6.36% for non-experts and 12.76% for experts.

Abstract

Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the generation of coherent sentences resembling human writing on a large scale, resulting in the creation of so-called deepfake texts. However, this progress poses security and privacy concerns, necessitating effective solutions for distinguishing deepfake texts from human-written ones. Although prior works studied humans' ability to detect deepfake texts, none has examined whether collaboration among humans improves the detection of deepfake texts. To address this gap in understanding, we conducted experiments with two groups: (1) non-expert individuals from the AMT platform and (2) writing experts from the Upwork platform. The results demonstrate that collaboration among humans can potentially improve the detection of deepfake texts for both groups, increasing detection accuracy by 6.36% for non-experts and 12.76% for experts, compared to individuals' detection accuracies. We further analyze the explanations that humans used for detecting a piece of text as deepfake text, and find that the strongest indicator of deepfake texts is their lack of coherence and consistency. Our study provides useful insights for future tool and framework designs to facilitate the collaborative human detection of deepfake texts. The experiment datasets and AMT implementations are available at: https://github.com/huashen218/llm-deepfake-human-study.git


Key findings
Collaboration significantly improved deepfake-text detection accuracy for both non-expert and expert groups. A lack of coherence and the presence of self-contradictions were strong indicators of deepfake texts, whereas grammar errors and repetition were weak indicators. Experts consistently outperformed non-experts.
Approach
The study used a human-in-the-loop approach. Participants (non-experts from AMT and writing experts from Upwork) identified the LLM-generated paragraph within multi-authored articles, first individually and then collaboratively. Their free-text explanations for labeling a paragraph as deepfake were then analyzed.
Datasets
A dataset of 50 three-paragraph articles was created. In each article, two paragraphs were human-written and one was generated with GPT-2 XL, and the position of the generated paragraph was randomized. The dataset is available in the GitHub repository linked above; a sketch of the assembly step is shown below.
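A minimal sketch of this assembly step, assuming uniform random placement of the generated paragraph; the function and variable names are illustrative, not taken from the released dataset scripts:

```python
import random

def build_article(human_paras: list[str], machine_para: str,
                  rng: random.Random | None = None) -> tuple[list[str], int]:
    """Insert the machine-generated paragraph at a random position among
    two human-written paragraphs; return the article and the true index."""
    assert len(human_paras) == 2
    rng = rng or random.Random()
    slot = rng.randrange(3)        # generated paragraph lands at index 0, 1, or 2
    paras = human_paras[:]
    paras.insert(slot, machine_para)
    return paras, slot

# Example: 'label' is the index annotators are asked to recover.
article, label = build_article(
    ["Human-written paragraph A.", "Human-written paragraph B."],
    "GPT-2 XL generated paragraph.",
)
```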
Model(s)
GPT-2 XL (1.5 billion parameters) was used to generate the deepfake paragraphs. BERT-base was used as a masked language model to select the best-fitting GPT-2-generated candidate paragraph; a sketch of this pipeline follows.
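A hedged sketch of the described generate-then-select pipeline, under the assumption that "best fitting" means the candidate with the highest BERT pseudo-log-likelihood (the summary does not specify the exact selection metric); function names and sampling parameters are illustrative, not the authors' code:

```python
import torch
from transformers import (BertForMaskedLM, BertTokenizer,
                          GPT2LMHeadModel, GPT2Tokenizer)

gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2-xl")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    """Sample n candidate continuation paragraphs from GPT-2 XL."""
    ids = gpt2_tok(prompt, return_tensors="pt").input_ids
    outs = gpt2.generate(
        ids, do_sample=True, top_p=0.95, max_new_tokens=120,
        num_return_sequences=n, pad_token_id=gpt2_tok.eos_token_id,
    )
    # Strip the prompt tokens; keep only the newly generated text.
    return [gpt2_tok.decode(o[ids.shape[1]:], skip_special_tokens=True)
            for o in outs]

@torch.no_grad()
def pseudo_log_likelihood(text: str) -> float:
    """Score text with BERT by masking each token in turn and averaging the
    log-probability BERT assigns to the original token (higher = better fit)."""
    ids = bert_tok(text, return_tensors="pt", truncation=True).input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):           # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = bert_tok.mask_token_id
        logits = bert(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / (len(ids) - 2)

# Keep the candidate BERT judges most fluent in context.
candidates = generate_candidates("First human-written paragraph of the article ...")
best = max(candidates, key=pseudo_log_likelihood)
```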
Author countries
USA