The Verification Crisis: Expert Perceptions of GenAI Disinformation and the Case for Reproducible Provenance

Authors: Alexander Loth, Martin Kappes, Marc-Oliver Pahl

Published: 2026-02-02 13:45:12+00:00

Comment: Accepted at ACM TheWebConf '26 Companion

AI Summary

This paper presents findings from an expert perception survey (N=21) on the perceived severity of multimodal Generative AI (GenAI) disinformation threats and the efficacy of current mitigation strategies. It highlights that while deepfake video has immediate shock value, large-scale text generation poses a systemic risk. The study advocates for reproducible provenance standards and regulatory frameworks over opaque technical detection tools, and calls for reproducible methods across GenAI disinformation research.

Abstract

The growth of Generative Artificial Intelligence (GenAI) has shifted disinformation production from manual fabrication to automated, large-scale manipulation. This article presents findings from the first wave of a longitudinal expert perception survey (N=21) involving AI researchers, policymakers, and disinformation specialists. It examines the perceived severity of multimodal threats -- text, image, audio, and video -- and evaluates current mitigation strategies. Results indicate that while deepfake video presents immediate shock value, large-scale text generation poses a systemic risk of epistemic fragmentation and synthetic consensus, particularly in the political domain. The survey reveals skepticism about technical detection tools, with experts favoring provenance standards and regulatory frameworks despite implementation barriers. GenAI disinformation research therefore requires reproducible methods. The current challenge is measurement: without standardized benchmarks and reproducibility checklists, tracking or countering synthetic media remains difficult. We propose treating information integrity as infrastructure, grounded in rigorous data provenance and methodological reproducibility.


Key findings
Experts rated deepfake video highest for 'shock value,' but identified large-scale text generation as the most significant systemic threat due to its potential for 'epistemic fragmentation' and 'synthetic consensus.' There is widespread skepticism regarding the effectiveness of technical detection tools, with experts favoring provenance standards (like C2PA) and regulatory frameworks. The paper concludes that establishing reproducible provenance and rigorous methodological standards is critical for building trustworthy defenses against GenAI disinformation.
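The provenance standards the experts favor (such as C2PA) rest on a simple idea: bind a cryptographic digest of the media to signed creation metadata, so any later edit invalidates the credential. The sketch below illustrates that idea in miniature; the manifest fields, the HMAC-based signing, and names like `create_manifest` are illustrative assumptions, not the actual C2PA data model, which uses certificate-based signatures and JUMBF-embedded manifests.

```python
import hashlib
import hmac
import json

# Conceptual sketch of a content-provenance manifest: bind a content
# hash to creation metadata, then sign the bundle. Illustrative only;
# real C2PA manifests use X.509 certificates, not a shared HMAC key.

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing credential

def create_manifest(content: bytes, metadata: dict) -> dict:
    """Bundle a content digest with metadata and an HMAC signature."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content hash still matches."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata or signature was tampered with
    return claim["content_sha256"] == hashlib.sha256(content).hexdigest()

media = b"example synthetic image bytes"
manifest = create_manifest(media, {"tool": "gen-model-x", "date": "2025-07-01"})
print(verify_manifest(media, manifest))           # original content verifies
print(verify_manifest(b"edited bytes", manifest)) # any edit breaks the binding
```

The design choice this illustrates is why experts see provenance as more robust than detection: verification reduces to a deterministic signature check rather than a statistical guess about whether content "looks" synthetic.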
Approach
The authors conducted a longitudinal expert perception survey (N=21) involving AI researchers, policymakers, and disinformation specialists. They analyzed expert perceptions of multimodal GenAI threats (text, image, audio, video) and the effectiveness of various mitigation strategies, using these insights to argue for reproducible provenance and methodological rigor as core solutions to the 'verification crisis'.
Datasets
Expert perception survey responses (N=21) collected between July and December 2025, hosted on Google Forms.
Model(s)
UNKNOWN
Author countries
Germany, France