Conditional Uncertainty-Aware Political Deepfake Detection with Stochastic Convolutional Neural Networks

Authors: Rafael-Petruţ Gardoş

Published: 2026-02-10 22:31:18+00:00

Comment: 21 pages, 12 figures, 18 tables

AI Summary

This work investigates conditional, uncertainty-aware political deepfake detection using stochastic convolutional neural networks. It moves beyond point predictions to evaluate uncertainty through observable criteria like calibration quality and alignment with prediction errors, particularly in high-stakes political contexts. The study constructs a politically focused image dataset and compares deterministic inference with various uncertainty estimation methods.

Abstract

Recent advances in generative image models have enabled the creation of highly realistic political deepfakes, posing risks to information integrity, public trust, and democratic processes. While automated deepfake detectors are increasingly deployed in moderation and investigative pipelines, most existing systems provide only point predictions and fail to indicate when outputs are unreliable, an operationally critical limitation in high-stakes political contexts. This work investigates conditional, uncertainty-aware political deepfake detection using stochastic convolutional neural networks within an empirical, decision-oriented reliability framework. Rather than treating uncertainty as a purely Bayesian construct, it is evaluated through observable criteria, including calibration quality, proper scoring rules, and its alignment with prediction errors under both global and confidence-conditioned analyses. A politically focused binary image dataset is constructed via deterministic metadata filtering from a large public real-synthetic corpus. Two pretrained CNN backbones (ResNet-18 and EfficientNet-B4) are fully fine-tuned for classification. Deterministic inference is compared with single-pass stochastic prediction, Monte Carlo dropout with multiple forward passes, temperature scaling, and ensemble-based uncertainty surrogates. Evaluation reports ROC-AUC, thresholded confusion matrices, calibration metrics, and generator-disjoint out-of-distribution performance. Results demonstrate that calibrated probabilistic outputs and uncertainty estimates enable risk-aware moderation policies. A systematic confidence-band analysis further clarifies when uncertainty provides operational value beyond predicted confidence, delineating both the benefits and limitations of uncertainty-aware deepfake detection in political settings.
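The risk-aware moderation policy the abstract alludes to can be illustrated with a selective-prediction toy: a detector abstains below a confidence threshold, trading coverage for accuracy on the retained set. This is a minimal NumPy sketch on synthetic stand-in data, not the paper's dataset or models; the function name `selective_accuracy` and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for calibrated detector outputs (assumption: the
# paper's actual model probabilities and labels are not reproduced here).
p_fake = rng.uniform(0.0, 1.0, size=1000)               # predicted P(fake)
labels = (rng.uniform(size=1000) < p_fake).astype(int)  # labels consistent with p_fake

confidence = np.maximum(p_fake, 1.0 - p_fake)
preds = (p_fake >= 0.5).astype(int)

def selective_accuracy(conf, preds, labels, tau):
    """Coverage and accuracy on the subset kept at confidence >= tau."""
    keep = conf >= tau
    coverage = keep.mean()
    acc = (preds[keep] == labels[keep]).mean() if keep.any() else float("nan")
    return coverage, acc

for tau in (0.5, 0.7, 0.9):
    cov, acc = selective_accuracy(confidence, preds, labels, tau)
    print(f"tau={tau:.1f}  coverage={cov:.2f}  accuracy={acc:.2f}")
```

Raising the threshold shrinks coverage but concentrates the retained predictions in a regime where errors are rarer, which is the operating trade-off a moderation pipeline tunes.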


Key findings
Calibrated probabilistic outputs and uncertainty estimates enable risk-aware moderation policies, improving reliability without degrading discriminative performance. Single-pass stochastic inference yields calibration improvements comparable to or stronger than those of Monte Carlo dropout, suggesting noise-induced smoothing plays a significant role. Uncertainty provides operational value by stratifying residual risk within high-confidence predictions, rather than offering global error detection at all confidence levels.
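The contrast between single-pass stochastic inference and Monte Carlo dropout can be sketched on a toy model. This is a pure-NumPy stand-in for a dropout-equipped classifier head, not the fine-tuned ResNet-18 or EfficientNet-B4 from the paper; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical penultimate features for one image and a linear classifier head.
x = rng.normal(size=64)
w = rng.normal(size=64) * 0.2
p_drop = 0.5

def stochastic_pass(x, w, rng):
    """One forward pass with dropout kept active at test time (inverted dropout)."""
    mask = rng.uniform(size=x.shape) >= p_drop
    return sigmoid((x * mask / (1.0 - p_drop)) @ w)

# Single-pass stochastic prediction: one sampled forward pass.
p_single = stochastic_pass(x, w, rng)

# Monte Carlo dropout: average T sampled passes; the spread across samples
# serves as a predictive-uncertainty estimate.
T = 100
samples = np.array([stochastic_pass(x, w, rng) for _ in range(T)])
p_mc, uncertainty = samples.mean(), samples.std()

print(f"single-pass p={p_single:.3f}  MC mean={p_mc:.3f}  MC std={uncertainty:.3f}")
```

The single-pass variant injects the same dropout noise but reports one sample instead of an average, which is why its calibration benefit in the paper is attributed to noise-induced smoothing rather than to Bayesian averaging.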
Approach
The authors propose a conditional, uncertainty-aware political deepfake detection method using stochastic convolutional neural networks. They evaluate uncertainty not as a purely Bayesian construct but empirically, through calibration quality, proper scoring rules, and its alignment with prediction errors under global and confidence-conditioned analyses. The inference procedures compared include deterministic prediction, single-pass stochastic prediction, Monte Carlo dropout with multiple forward passes, temperature scaling, and ensemble-based uncertainty surrogates.
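Of the calibration methods compared, temperature scaling is the simplest: a single scalar T is fitted on held-out data to rescale logits before the sigmoid. The sketch below recovers T on synthetic overconfident logits via grid search (the usual implementation optimizes the one parameter with LBFGS); the data and the factor 3.0 are illustrative assumptions, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical held-out set: labels drawn from well-calibrated logits, then
# the logits are inflated 3x to mimic an overconfident detector, so a fitted
# temperature near 3 should be recovered.
true_logits = rng.normal(size=2000)
labels = (rng.uniform(size=2000) < 1.0 / (1.0 + np.exp(-true_logits))).astype(int)
overconfident_logits = 3.0 * true_logits

def nll(logits, labels, T):
    """Mean negative log-likelihood of temperature-scaled probabilities."""
    p = 1.0 / (1.0 + np.exp(-logits / T))
    eps = 1e-12
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

grid = np.linspace(0.5, 5.0, 91)
T_star = grid[np.argmin([nll(overconfident_logits, labels, T) for T in grid])]
print(f"fitted temperature T* = {T_star:.2f}")
```

Because temperature scaling only rescales logits monotonically, it changes calibration while leaving ranking metrics such as ROC-AUC untouched, which is why it pairs naturally with the uncertainty surrogates above.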
Datasets
OpenFake (filtered to create a politically focused binary image dataset)
Model(s)
ResNet-18, EfficientNet-B4
Author countries
UNKNOWN