Sharp Multiple Instance Learning for DeepFake Video Detection

Authors: Xiaodan Li, Yining Lang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Shuhui Wang, Hui Xue, Quan Lu

Published: 2020-08-11 08:52:17+00:00

Comment: Accepted at ACM MM 2020. 11 pages, 8 figures, with appendix

Journal Ref: Proceedings of the 28th ACM International Conference on Multimedia, 2020

AI Summary

This paper introduces a new problem of partial face attacks in DeepFake videos, where only video-level labels are available and not all faces in a fake video are manipulated. To address this, the authors propose Sharp Multiple Instance Learning (S-MIL), a framework that maps instance embeddings directly to bag predictions, alleviating the gradient vanishing found in traditional MIL. They also design spatial-temporal encoded instances to model intra-frame and inter-frame inconsistencies, and introduce a new dataset, FFPMS, for partial DeepFake video detection.

Abstract

With the rapid development of facial manipulation techniques, face forgery has received considerable attention in the multimedia and computer vision communities due to security concerns. Existing methods are mostly designed for single-frame detection trained with precise image-level labels, or for video-level prediction that only models inter-frame inconsistency, leaving potentially high risks from DeepFake attackers. In this paper, we introduce a new problem of partial face attacks in DeepFake videos, where only video-level labels are provided and not all the faces in a fake video are manipulated. We address this problem with a multiple instance learning (MIL) framework, treating faces as instances and the input video as a bag. A sharp MIL (S-MIL) is proposed that builds a direct mapping from instance embeddings to the bag prediction, rather than from instance embeddings to instance predictions and then to the bag prediction as in traditional MIL. Theoretical analysis proves that the gradient vanishing in traditional MIL is relieved in S-MIL. To generate instances that can accurately incorporate the partially manipulated faces, spatial-temporal encoded instances are designed to fully model the intra-frame and inter-frame inconsistency, which further helps to promote detection performance. We also construct a new dataset, FFPMS, for partially attacked DeepFake video detection, which can benefit the evaluation of different methods at both the frame and video levels. Experiments on FFPMS and the widely used DFDC dataset verify that S-MIL is superior to other counterparts for partially attacked DeepFake video detection. In addition, S-MIL can also be adapted to traditional DeepFake image detection tasks and achieves state-of-the-art performance on single-frame datasets.


Key findings
S-MIL consistently outperforms both frame-based and video-based counterparts for partially attacked DeepFake video detection on FFPMS and DFDC datasets. It demonstrates robust performance even with a low rate of fake faces and achieves state-of-the-art results in traditional single-frame detection tasks, highlighting its generalization ability and the effectiveness of its weighting mechanism and spatial-temporal instances.
Approach
The proposed Sharp Multiple Instance Learning (S-MIL) treats faces as instances and the video as a bag, building a direct mapping from instance embeddings to bag prediction to alleviate gradient vanishing. It incorporates spatial-temporal encoded instances, using 1-D CNNs with multiple temporal kernels (k=1, 2, 3) to capture both intra-frame and inter-frame inconsistencies for improved detection.
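The two ideas above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the dimensions, random weights, and softmax-attention pooling are illustrative assumptions; the point is that temporal 1-D convolutions with kernel sizes k = 1, 2, 3 produce spatial-temporal instances, and the bag score comes from pooling instance *embeddings* directly rather than averaging per-instance probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_conv(features, kernel_size, weight):
    """Valid 1-D convolution along the time axis: (T, D) -> (T - k + 1, D)."""
    T, D = features.shape
    return np.stack([
        np.tensordot(features[t:t + kernel_size], weight, axes=([0, 1], [0, 1]))
        for t in range(T - kernel_size + 1)
    ])

def spatial_temporal_instances(face_feats, kernel_sizes=(1, 2, 3)):
    """Encode per-frame face features into instances with several temporal
    kernel sizes, so instances cover both intra- and inter-frame patterns.
    Weights here are random stand-ins for learned 1-D CNN kernels."""
    D = face_feats.shape[1]
    instances = []
    for k in kernel_sizes:
        w = rng.standard_normal((k, D, D)) * 0.1
        instances.append(temporal_conv(face_feats, k, w))
    return np.concatenate(instances, axis=0)  # (N_instances, D)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smil_bag_score(instances, w_attn, w_score):
    """S-MIL-style pooling (sketch): softmax-weight the instance embeddings,
    pool them, and map the pooled embedding directly to one bag prediction,
    instead of thresholding/averaging per-instance sigmoid probabilities."""
    attn = np.exp(instances @ w_attn)
    attn /= attn.sum()
    pooled = (attn[:, None] * instances).sum(axis=0)  # (D,)
    return sigmoid(pooled @ w_score)                  # scalar bag probability

# Toy usage: 8 frames of 16-d face features form one video (bag).
feats = rng.standard_normal((8, 16))
inst = spatial_temporal_instances(feats)   # 8 + 7 + 6 = 21 instances
score = smil_bag_score(inst, rng.standard_normal(16), rng.standard_normal(16))
print(inst.shape, float(score))
```

Because the sigmoid is applied once, after pooling, its saturating gradient is not multiplied across every instance, which is the intuition behind S-MIL's relief of gradient vanishing.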
Datasets
FaceForensics++ (FF++), Celeb-DF, Deepfake Detection Challenge (DFDC), FaceForensics Plus with Mixing samples (FFPMS)
Model(s)
XceptionNet (as backbone), 1-D CNNs for temporal encoding
Author countries
China