Deepfakes at Face Value: Image and Authority

Authors: James Ravi Kirkpatrick

Published: 2026-04-14 09:16:45+00:00

Comment: 21 pages, accepted copy published in AI & Society (2026)

AI Summary

This paper argues that existing accounts of deepfake wrongfulness, which primarily focus on harm, are incomplete. It introduces the 'Authority Interest View,' asserting that deepfakes are wrong when they subvert an individual's legitimate interest in having authority over the permissible uses of their image and the governance of their identity. The paper posits a specific right against the algorithmic conscription of one's identity, distinguishing between permissible forms of appropriation and wrongful algorithmic simulation.

Abstract

Deepfakes are synthetic media that superimpose or generate someone's likeness onto pre-existing sound, images, or videos using deep learning methods. Existing accounts of the wrongs involved in creating and distributing deepfakes focus on the harms they cause or the non-normative interests they violate. However, these approaches do not explain how deepfakes can be wrongful even when they cause no harm and set back no other non-normative interest. To address this issue, this paper identifies a neglected reason why deepfakes are wrong: they can subvert our legitimate interest in having authority over the permissible uses of our image and the governance of our identity. We argue that deepfakes are wrong when they usurp our authority to determine the provenance of our own agency by exploiting our biometric features as a generative resource. In particular, we have a specific right against the algorithmic conscription of our identity. We refine the scope of this interest by distinguishing permissible forms of appropriation, such as artistic depiction, from wrongful algorithmic simulation.


Key findings
The key finding is the articulation and defense of the 'Authority Interest View,' which holds that deepfakes are wrong because they violate an individual's authority over their image and identity, even in the absence of tangible harm. This framework introduces the concept of a 'right against algorithmic conscription' and distinguishes wrongful algorithmic simulation from permissible forms of appropriation, such as artistic depiction or private mental imaginings.
Approach
The paper adopts a philosophical and ethical approach, presenting a normative argument rather than a technical solution. It develops the 'Authority Interest View,' on which deepfakes are morally wrong because they violate an individual's fundamental interest in authorship of and authority over their biometric identity, grounded in a specific 'right against algorithmic conscription.'
Datasets
UNKNOWN
Model(s)
UNKNOWN
Author countries
United Kingdom