BERT Transformer model for Detecting Arabic GPT2 Auto-Generated Tweets

Authors: Fouzi Harrag, Maria Debbah, Kareem Darwish, Ahmed Abdelali

Published: 2021-01-22 21:50:38+00:00

AI Summary

This paper proposes a transfer-learning model based on AraBERT to detect Arabic deepfake tweets generated by GPT2-Small-Arabic. The model achieves an accuracy of up to 98%, outperforming several RNN baseline models, and represents the first study to combine AraBERT and GPT-2 for Arabic deepfake text detection.

Abstract

During the last two decades, we have progressively turned to the Internet and social media to find news, hold conversations and share opinions. Recently, OpenAI has developed a machine learning system called GPT-2 (Generative Pre-trained Transformer-2), which can produce deepfake texts. It can generate blocks of text from brief writing prompts that look as if they were written by humans, facilitating the spread of false or auto-generated text. In line with this progress, and in order to counteract potential dangers, several methods have been proposed for detecting text written by these language models. In this paper, we propose a transfer-learning-based model that can detect whether an Arabic sentence was written by a human or automatically generated by a bot. Our dataset is based on tweets from a previous work, which we have crawled and extended using the Twitter API. We used GPT2-Small-Arabic to generate fake Arabic sentences. For evaluation, we compared different recurrent neural network (RNN) word-embedding-based baseline models, namely LSTM, Bi-LSTM, GRU and Bi-GRU, with a transformer-based model. Our new transfer-learning model obtained an accuracy of up to 98%. To the best of our knowledge, this is the first study in which AraBERT and GPT-2 were combined to detect and classify Arabic auto-generated texts.


Key findings
The fine-tuned AraBERT model achieved an accuracy of 98.7% in detecting deepfake Arabic tweets, significantly outperforming the RNN baseline models. The superior performance is attributed to AraBERT's ability to leverage contextual information effectively.
Approach
The authors created a dataset of human-written and GPT2-Small-Arabic-generated Arabic tweets. They then compared the performance of several RNN models (LSTM, Bi-LSTM, GRU, Bi-GRU) against a fine-tuned AraBERT model for deepfake detection.
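The fine-tuning step described above can be sketched with the Hugging Face transformers library. This is a minimal illustration, not the paper's exact setup: the checkpoint name (`aubmindlab/bert-base-arabert`), the hyperparameters, and the metric helper are all assumptions.

```python
# Hedged sketch: fine-tuning an AraBERT checkpoint for binary
# human-vs-auto-generated tweet classification. Checkpoint name and all
# hyperparameters are illustrative assumptions, not values from the paper.
import numpy as np

MODEL_NAME = "aubmindlab/bert-base-arabert"  # assumed AraBERT checkpoint

def compute_metrics(eval_pred):
    """Accuracy on the evaluation split: fraction of correct predictions."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

def build_trainer(train_ds, eval_ds):
    # transformers is imported lazily so the pure metric helper above
    # remains usable without the heavy dependency installed.
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2)  # label 0 = human, label 1 = generated
    args = TrainingArguments(
        output_dir="arabert-deepfake",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    )
    return Trainer(model=model, args=args,
                   train_dataset=train_ds, eval_dataset=eval_ds,
                   tokenizer=tokenizer, compute_metrics=compute_metrics)
```

The datasets passed to `build_trainer` would be tokenized (text, label) examples; the accuracy reported by the paper corresponds to the metric computed by `compute_metrics`.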
Datasets
A dataset of Arabic tweets, expanded from a previous dataset (Almerekhi and Elsayed, 2015) by crawling user timelines and supplemented with GPT2-Small-Arabic generated deepfake tweets.
Model(s)
LSTM, Bi-LSTM, GRU, Bi-GRU, AraBERT
Author countries
Algeria, Qatar