Deepfakes at Face Value: Image and Authority
Authors: James Ravi Kirkpatrick
Published: 2026-04-14 09:16:45+00:00
Comment: 21 pages, accepted copy published in AI & Society (2026)
AI Summary
This paper argues that existing accounts of deepfake wrongfulness, which primarily focus on harm, are incomplete. It introduces the 'Authority Interest View,' asserting that deepfakes are wrong when they subvert an individual's legitimate interest in having authority over the permissible uses of their image and the governance of their identity. The paper posits a specific right against the algorithmic conscription of one's identity, distinguishing between permissible forms of appropriation and wrongful algorithmic simulation.
Abstract
Deepfakes are synthetic media that superimpose or generate someone's likeness onto pre-existing sound, images, or videos using deep learning methods. Existing accounts of the wrongs involved in creating and distributing deepfakes focus on the harms they cause or the non-normative interests they violate. However, these approaches do not explain how deepfakes can be wrongful even when they cause no harm and set back no other non-normative interest. To address this issue, this paper identifies a neglected reason why deepfakes are wrong: they can subvert our legitimate interests in having authority over the permissible uses of our image and the governance of our identity. We argue that deepfakes are wrong when they usurp our authority to determine the provenance of our own agency by exploiting our biometric features as a generative resource. In particular, we have a specific right against the algorithmic conscription of our identity. We refine the scope of this interest by distinguishing permissible forms of appropriation, such as artistic depiction, from wrongful algorithmic simulation.