FakeParts: a New Family of AI-Generated DeepFakes

Authors: Gaetan Brison, Soobash Daiboo, Samy Aimeur, Awais Hussain Sani, Xi Wang, Gianni Franchi, Vicky Kalogeiton

Published: 2025-08-28 17:55:14+00:00

AI Summary

This paper introduces FakeParts, a new class of deepfakes involving subtle, localized video manipulations, and FakePartsBench, a large-scale benchmark dataset (25K+ videos) designed to evaluate detection methods for these challenging deepfakes. The authors demonstrate that both humans and state-of-the-art models struggle to detect FakeParts, highlighting a critical vulnerability in current deepfake detection approaches.

Abstract

We introduce FakeParts, a new class of deepfakes characterized by subtle, localized manipulations to specific spatial regions or temporal segments of otherwise authentic videos. Unlike fully synthetic content, these partial manipulations, ranging from altered facial expressions to object substitutions and background modifications, blend seamlessly with real elements, making them particularly deceptive and difficult to detect. To address the critical gap in detection capabilities, we present FakePartsBench, the first large-scale benchmark dataset specifically designed to capture the full spectrum of partial deepfakes. Comprising over 25K videos with pixel-level and frame-level manipulation annotations, our dataset enables comprehensive evaluation of detection methods. Our user studies demonstrate that FakeParts reduces human detection accuracy by over 30% compared to traditional deepfakes, with similar performance degradation observed in state-of-the-art detection models. This work identifies an urgent vulnerability in current deepfake detection approaches and provides the necessary resources to develop more robust methods for partial video manipulations.


Key findings
Both human and automated detection methods show significantly reduced accuracy on FakeParts compared to traditional deepfakes, and the most subtle manipulations were often the hardest to detect, exposing a critical vulnerability in current approaches. CLIP-based models performed better on FakeParts, whereas traditional models performed better on full deepfakes.
Approach
The authors created FakePartsBench, a new dataset of partially manipulated videos (FakeParts) spanning spatial and temporal edits, generated with multiple state-of-the-art models. They then evaluated existing state-of-the-art deepfake detection models on this dataset and compared model performance against human evaluators.
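The kind of comparison described above can be sketched as a simple video-level scoring loop. This is an illustrative sketch only, not the paper's actual evaluation code: the detector outputs and label values below are hypothetical, and real evaluations on FakePartsBench would use its pixel- and frame-level annotations and larger video sets.

```python
# Hypothetical sketch of video-level deepfake detection scoring.
# Labels: 1 = manipulated video, 0 = authentic video (illustrative data).

def accuracy(preds, labels):
    """Fraction of videos whose binary prediction matches the label."""
    assert len(preds) == len(labels) and labels
    correct = sum(int(p == y) for p, y in zip(preds, labels))
    return correct / len(labels)

labels = [1, 1, 1, 0, 0, 0]

# A detector trained on fully synthetic videos might flag obvious fakes
# but miss subtle, localized FakeParts-style edits (made-up numbers,
# chosen only to illustrate the reported accuracy drop).
preds_full_fakes = [1, 1, 1, 0, 0, 0]   # traditional deepfakes: all caught
preds_fakeparts  = [1, 0, 0, 0, 0, 0]   # partial edits: two fakes missed

acc_full = accuracy(preds_full_fakes, labels)
acc_parts = accuracy(preds_fakeparts, labels)
drop = acc_full - acc_parts  # the gap the benchmark is designed to expose
```

The same per-video loop extends naturally to frame-level scoring: with FakePartsBench's frame-level annotations, `labels` would mark which frames of a video are manipulated rather than labeling the whole clip.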
Datasets
FakePartsBench (created by the authors), DAVIS 2016, DAVIS 2017, YouTube-VOS 2019, MOSE, LVD-2M, Celeb-DF, CelebA, Animal Kingdom
Model(s)
CNNDetection, UnivFD, NPR, FatFormer, C2P-CLIP, DeMamba, AIGVDet, Sora, Veo2, Allegro AI, Framer, RAVE, DiffuEraser, ProPainter, InsightFace, AKiRa
Author countries
France