Interpretable facial dynamics as behavioral and perceptual traces of deepfakes
Authors: Timothy Joseph Murphy, Jennifer Cook, Hélio Clemente José Cuve
Published: 2026-04-23 15:07:30+00:00
Comment: Main paper: 19 pages, 5 figures, 4 tables. SI Appendix: 11 pages, 3 figures, 6 tables
AI Summary
This study introduces an interpretable deepfake detection method based on bio-behavioral facial dynamics: core low-dimensional patterns of facial movement are identified and temporal features derived from them. Traditional machine learning classifiers trained on these features achieved significant above-chance deepfake classification, with detection being more accurate for emotive expressions. The research also compares model decisions with human perceptual judgments, revealing context-dependent convergence for emotive content but divergent underlying strategies.
Abstract
Deepfake detection research has largely converged on deep learning approaches that, despite strong benchmark performance, offer limited insight into what distinguishes real from manipulated facial behavior. This study presents an interpretable alternative grounded in bio-behavioral features of facial dynamics and evaluates how computational detection strategies relate to human perceptual judgments. We identified core low-dimensional patterns of facial movement and derived temporal features characterizing their spatiotemporal structure. Traditional machine learning classifiers trained on these features achieved modest but significant above-chance deepfake classification, driven by higher-order temporal irregularities that were more pronounced in manipulated than in real facial dynamics. Notably, detection was substantially more accurate for videos containing emotive expressions than for those without. An emotional valence classification analysis further indicated that emotive signals are systematically degraded in deepfakes, explaining the differential impact of emotive dynamics on detection. Furthermore, we provide an additional and often overlooked dimension of explainability by assessing the relationship between model decisions and human perceptual detection. Model and human judgments converged for emotive but diverged for non-emotive videos, and even where outputs aligned, the underlying detection strategies differed. These findings demonstrate that face-swapped deepfakes carry a measurable behavioral fingerprint, most salient during emotional expression. Additionally, model-human comparisons suggest that interpretable computational features and human perception may offer complementary rather than redundant routes to detection.
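The pipeline the abstract describes (low-dimensional movement patterns, temporal features derived from them, a traditional classifier) can be sketched roughly as follows. This is an illustrative toy example on synthetic landmark trajectories, not the authors' actual method: the PCA-based dimensionality reduction, the specific velocity/acceleration features, and the jittered "fake" clips are all assumptions made here to show the general idea that higher-order temporal irregularities can drive classical classification.

```python
# Illustrative sketch (NOT the paper's pipeline): reduce facial-motion
# time series to a few movement components, summarize their dynamics,
# and train a classical classifier on the resulting features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_clip(jitter, n_frames=120, n_points=20):
    """Synthetic 2-D landmark trajectories for one video clip.
    `jitter` mimics the higher-order temporal irregularity that the
    abstract reports being stronger in manipulated dynamics."""
    t = np.linspace(0, 4 * np.pi, n_frames)[:, None]
    smooth = np.sin(t + rng.uniform(0, np.pi, 2 * n_points))
    return smooth + jitter * rng.standard_normal((n_frames, 2 * n_points))

def temporal_features(clip, n_components=3):
    """Project frames onto principal movement components, then summarize
    each component's dynamics via mean |velocity| and |acceleration|."""
    scores = PCA(n_components=n_components).fit_transform(clip)
    vel = np.diff(scores, axis=0)        # first-order differences
    acc = np.diff(scores, n=2, axis=0)   # second-order differences
    return np.concatenate([np.abs(vel).mean(0), np.abs(acc).mean(0)])

# "Real" clips have smooth motion; "fake" clips carry extra temporal jitter.
X = np.array([temporal_features(make_clip(j))
              for j in [0.05] * 40 + [0.5] * 40])
y = np.array([0] * 40 + [1] * 40)

acc_score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc_score:.2f}")
```

On this synthetic data the classifier separates the two classes easily because the jittered clips have much larger acceleration-magnitude features; on real deepfake data the signal is far weaker, which is consistent with the "modest but significant" performance the abstract reports.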