Seeing Isn't Believing: Addressing the Societal Impact of Deepfakes in Low-Tech Environments

Authors: Azmine Toushik Wasi, Rahatun Nesa Priti, Mahir Absar Khan, Abdur Rahman, Mst Rafia Islam

Published: 2025-08-13 18:18:24+00:00

Comment: Accepted to ACM MM 2025 Workshop Diffusion of Harmful Content on Online Web (DHOW)

AI Summary

This work investigates public awareness, perceptions, and experiences with deepfakes in low-tech, resource-limited environments. Through a survey, the authors identify critical knowledge gaps and a lack of effective detection tools, leading to the proposal of a comprehensive framework for deepfake prevention, detection, and mitigation tailored for such settings. The paper offers actionable insights for supporting vulnerable populations and calls for further interdisciplinary efforts.

Abstract

Deepfakes, AI-generated multimedia content that mimics real media, are becoming increasingly prevalent, posing significant risks to political stability, social trust, and economic well-being, especially in developing societies with limited media literacy and technological infrastructure. This work aims to understand how these technologies are perceived by, and how they impact, resource-limited communities. We conducted a survey to assess public awareness, perceptions, and experiences with deepfakes, leading to the development of a comprehensive framework for prevention, detection, and mitigation in tech-limited environments. Our findings reveal critical knowledge gaps and a lack of effective detection tools, emphasizing the need for targeted education and accessible verification solutions. This work offers actionable insights to support vulnerable populations and calls for further interdisciplinary efforts to tackle deepfake challenges globally, particularly in the Global South.


Key findings

The survey revealed significant public knowledge gaps regarding deepfakes and low confidence in individuals' ability to detect manipulated content, with a large majority expressing concern over deepfake misuse. These findings highlight the urgent need for targeted public education, accessible AI-powered verification tools, and robust regulatory frameworks to protect vulnerable populations in low-tech environments.

Approach

The authors conducted a survey with 73 participants in Bangladesh to assess public awareness, perceptions, and experiences with deepfakes in tech-limited environments. Based on the survey findings and existing literature, they developed a three-stage framework, Prevention, Detection, and Mitigation, providing an end-to-end solution to combat deepfake challenges in resource-constrained settings.

Datasets

73 responses collected from a survey of participants in Bangladesh.

Model(s)

UNKNOWN

Author countries

Bangladesh