Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes
Authors: Li Qiwei, Shihui Zhang, Andrew Timothy Kasper, Joshua Ashkinaze, Asia A. Eaton, Sarita Schoenebeck, Eric Gilbert
Published: 2024-09-18 17:01:48+00:00
Comment: under review
AI Summary
This audit study investigates the efficacy of reporting non-consensual intimate media (NCIM) on X (formerly Twitter) using either the platform's non-consensual nudity policy or a copyright infringement claim. Researchers uploaded 50 AI-generated nude images and reported them via these two mechanisms. The study found a stark contrast in effectiveness, with copyright claims leading to rapid content removal while non-consensual nudity reports were entirely ineffective.
Abstract
Non-consensual intimate media (NCIM) inflicts significant harm. Currently, victim-survivors can use two mechanisms to report NCIM: as a non-consensual nudity violation or as copyright infringement. We conducted an audit study comparing the takedown speed of NCIM reported to X (formerly Twitter) under each mechanism. We uploaded 50 AI-generated nude images and reported half under X's non-consensual nudity reporting mechanism and half under its copyright infringement mechanism. The copyright condition resulted in successful removal of all images within 25 hours (100% removal rate), while the non-consensual nudity reports resulted in no image removal for over three weeks (0% removal rate). We stress the need for targeted legislation to regulate NCIM removal online. We also discuss ethical considerations for auditing NCIM on social platforms.