Generative Propaganda

Authors: Madeleine I. G. Daepp, Alejandro Cuevas, Robert Osazuwa Ness, Vickie Yu-Ping Wang, Bharat Kumar Nayak, Dibyendu Mishra, Ti-Chung Cheng, Shaily Desai, Joyojeet Pal

Published: 2025-09-23 15:27:00+00:00

AI Summary

This research paper investigates the real-world use of generative AI to shape public opinion, drawing on interviews with defenders of information in Taiwan and with both creators and defenders in India. The study finds that the emphasis on deceptive deepfakes overshadows broader uses of AI for persuasion and distortion, and identifies efficiency gains, rather than deception, as the primary driver of AI's misuse.

Abstract

Generative propaganda is the use of generative artificial intelligence (AI) to shape public opinion. To characterize its use in real-world settings, we conducted interviews with defenders (e.g., factcheckers, journalists, officials) in Taiwan and creators (e.g., influencers, political consultants, advertisers) as well as defenders in India, centering two places characterized by high levels of online propaganda. The term deepfakes, we find, exerts outsized discursive power in shaping defenders' expectations of misuse and, in turn, the interventions that are prioritized. To better characterize the space of generative propaganda, we develop a taxonomy that distinguishes between obvious versus hidden and promotional versus derogatory use. Deception was neither the main driver nor the main impact vector of AI's use; instead, Indian creators sought to persuade rather than to deceive, often making AI's use obvious in order to reduce legal and reputational risks, while Taiwan's defenders saw deception as a subset of broader efforts to distort the prevalence of strategic narratives online. AI was useful and used, however, in producing efficiency gains in communicating across languages and modes, and in evading human and algorithmic detection. Security researchers should reconsider threat models to clearly differentiate deepfakes from promotional and obvious uses, to complement and bolster the social factors that constrain misuse by internal actors, and to counter efficiency gains globally.


Key findings
Deceptive deepfakes were far less prevalent than defenders anticipated; observed uses of generative AI instead centered on persuasion and distortion. Efficiency gains in reduced detectability, multimodality, and multilingualism were key drivers of AI's misuse in political communication, while social factors, such as legal risks and reputational concerns, significantly constrained the use of adversarial deepfakes.
Approach
The researchers conducted 64 hours of interviews with 72 participants in Taiwan and India, analyzing the data with a two-phase abductive coding approach. They developed a taxonomy that categorizes observed uses of generative AI along two axes: obvious versus hidden, and promotional versus derogatory (see the sketch below).
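As an illustration only (not the authors' code), the taxonomy's two binary axes can be expressed as a minimal data structure. The axis names come from the paper; the class names, comments, and example are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    OBVIOUS = "obvious"   # AI use is disclosed or readily apparent
    HIDDEN = "hidden"     # AI use is concealed (e.g., deceptive deepfakes)

class Valence(Enum):
    PROMOTIONAL = "promotional"  # boosts a candidate, brand, or narrative
    DEROGATORY = "derogatory"    # attacks or discredits a target

@dataclass(frozen=True)
class GenerativePropagandaUse:
    """One cell of the paper's 2x2 taxonomy of generative propaganda."""
    visibility: Visibility
    valence: Valence

# Hypothetical example: the hidden/derogatory cell corresponds to the
# deceptive-deepfake threat model the paper argues is overemphasized.
deceptive_deepfake = GenerativePropagandaUse(Visibility.HIDDEN, Valence.DEROGATORY)
```

On this framing, the paper's finding is that most observed uses fell outside the hidden/derogatory cell, e.g., obvious promotional content produced for efficiency.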
Datasets
Interview data from 72 participants (35 in Taiwan and 37 in India), totaling 64 hours of interviews.
Model(s)
UNKNOWN
Author countries
USA, India, UK