The Verification Crisis: Expert Perceptions of GenAI Disinformation and the Case for Reproducible Provenance
Authors: Alexander Loth, Martin Kappes, Marc-Oliver Pahl
Published: 2026-02-02 13:45:12+00:00
Comment: Accepted at ACM TheWebConf '26 Companion
AI Summary
This paper presents findings from an expert perception survey (N=21) on the perceived severity of multimodal Generative AI (GenAI) disinformation threats and the efficacy of current mitigation strategies. It highlights that while deepfake video carries immediate shock value, large-scale text generation poses a systemic risk. The study advocates for reproducible provenance standards and regulatory frameworks over opaque technical detection tools, and stresses the necessity of reproducible methods in GenAI disinformation research.
Abstract
The growth of Generative Artificial Intelligence (GenAI) has shifted disinformation production from manual fabrication to automated, large-scale manipulation. This article presents findings from the first wave of a longitudinal expert perception survey (N=21) involving AI researchers, policymakers, and disinformation specialists. It examines the perceived severity of multimodal threats -- text, image, audio, and video -- and evaluates current mitigation strategies. Results indicate that while deepfake video carries immediate shock value, large-scale text generation poses a systemic risk of epistemic fragmentation and synthetic consensus, particularly in the political domain. The survey reveals skepticism about technical detection tools, with experts favoring provenance standards and regulatory frameworks despite implementation barriers. Reproducible methods are essential for GenAI disinformation research, yet the core challenge is measurement: without standardized benchmarks and reproducibility checklists, synthetic media can be neither reliably tracked nor effectively countered. We therefore propose treating information integrity as infrastructure, grounded in rigorous data provenance and methodological reproducibility.