Scalable Face Security Vision Foundation Model for Deepfake, Diffusion, and Spoofing Detection

Authors: Gaojian Wang, Feng Lin, Tong Wu, Zhisheng Yan, Kui Ren

Published: 2025-10-12 15:38:03+00:00

AI Summary

The FS-VFM framework is proposed as a scalable, self-supervised Vision Foundation Model that learns robust, fundamental representations of real faces for generalized face security tasks: deepfake detection (DFD), face anti-spoofing (FAS), and diffusion facial forensics (DiFF). It combines three synergistic objectives (3C) that couple Masked Image Modeling (MIM), guided by CRFR-P masking, with Instance Discrimination (ID) to encode both local and global facial semantics. A lightweight FS-Adapter is also introduced for ultra-efficient transfer learning on downstream tasks.
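
A minimal sketch of how the three objectives might be combined in a single pre-training step, in PyTorch-style Python. The module and helper names (student, teacher, decoder, proj_s, proj_t, crfrp_mask, patchify), the loss weighting, and the EMA momentum are illustrative assumptions, not the paper's actual API.

# Illustrative FS-VFM-style pre-training step (an assumption, not the official code).
# `student`/`teacher` are ViT encoders, `decoder` is an MAE-style decoder, `proj_s`/`proj_t`
# are projection heads, `patchify` splits images into patch vectors, and `crfrp_mask`
# returns a boolean patch mask (True = masked).
import torch
import torch.nn.functional as F

def pretrain_step(images, student, teacher, decoder, proj_s, proj_t, crfrp_mask,
                  ema_momentum=0.996, lambda_id=1.0):
    patches = patchify(images)                    # (B, N, patch_dim)
    mask = crfrp_mask(images)                     # (B, N) boolean

    # MIM branch: reconstruct the masked patches; Consistency/Coherency come from
    # where CRFR-P places the mask rather than from the loss itself.
    latent = student(images, mask=mask)           # encodes visible patches (+ [CLS] at index 0)
    pred = decoder(latent, mask=mask)             # predicts pixels of the masked patches
    loss_mim = F.mse_loss(pred[mask], patches[mask])

    # ID branch: self-distillation between the masked-view student and a full-view
    # EMA teacher, encouraging local-to-global Correspondence.
    z_s = F.normalize(proj_s(latent[:, 0]), dim=-1)
    with torch.no_grad():
        z_t = F.normalize(proj_t(teacher(images)[:, 0]), dim=-1)
    loss_id = 2 - 2 * (z_s * z_t).sum(dim=-1).mean()     # cosine-distance loss

    loss = loss_mim + lambda_id * loss_id

    # EMA update of the teacher (and its projection head) from the student.
    with torch.no_grad():
        for p_t, p_s in zip(list(teacher.parameters()) + list(proj_t.parameters()),
                            list(student.parameters()) + list(proj_s.parameters())):
            p_t.mul_(ema_momentum).add_(p_s, alpha=1 - ema_momentum)
    return loss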

Abstract

With abundant, unlabeled real faces, how can we learn robust and transferable facial representations to boost generalization across various face security tasks? We make the first attempt and propose FS-VFM, a scalable self-supervised pre-training framework, to learn fundamental representations of real face images. We introduce three learning objectives, namely 3C, that synergize masked image modeling (MIM) and instance discrimination (ID), empowering FS-VFM to encode both local patterns and global semantics of real faces. Specifically, we formulate various facial masking strategies for MIM and devise a simple yet effective CRFR-P masking, which explicitly prompts the model to pursue meaningful intra-region Consistency and challenging inter-region Coherency. We present a reliable self-distillation mechanism that seamlessly couples MIM with ID to establish underlying local-to-global Correspondence. After pre-training, vanilla vision transformers (ViTs) serve as universal Vision Foundation Models for downstream Face Security tasks: cross-dataset deepfake detection, cross-domain face anti-spoofing, and unseen diffusion facial forensics. To efficiently transfer the pre-trained FS-VFM, we further propose FS-Adapter, a lightweight plug-and-play bottleneck atop the frozen backbone with a novel real-anchor contrastive objective. Extensive experiments on 11 public benchmarks demonstrate that our FS-VFM consistently generalizes better than diverse VFMs, spanning natural and facial domains, fully, weakly, and self-supervised paradigms, small, base, and large ViT scales, and even outperforms SOTA task-specific methods, while FS-Adapter offers an excellent efficiency-performance trade-off. The code and models are available on https://fsfm-3c.github.io/fsvfm.html.
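
The abstract names a "real-anchor contrastive objective" for FS-Adapter but gives no formula. The sketch below is one plausible reading only, written as an assumption: real-face embeddings are pulled toward an anchor computed from the real samples in a batch, while fake or spoof embeddings are pushed below a similarity margin. The function name, margin, and anchor construction are all hypothetical.

# Hypothetical real-anchor contrastive loss (an assumed reading, not necessarily the
# paper's formulation). `emb` are adapter output embeddings, `labels` are 1 = real, 0 = fake.
import torch
import torch.nn.functional as F

def real_anchor_contrastive(emb, labels, margin=0.5):
    emb = F.normalize(emb, dim=-1)
    real = emb[labels == 1]
    if real.shape[0] == 0:                         # no real samples in this batch
        return emb.new_zeros(())
    # Anchor = mean of real embeddings; detached so it acts as a fixed target.
    anchor = F.normalize(real.mean(dim=0).detach(), dim=-1)
    sim = emb @ anchor                             # cosine similarity of each sample to the anchor
    loss_real = (1 - sim[labels == 1]).mean()      # pull real faces toward the anchor
    fake_sim = sim[labels == 0]
    loss_fake = F.relu(fake_sim - margin).mean() if fake_sim.numel() else emb.new_zeros(())
    return loss_real + loss_fake                   # push fakes below the similarity margin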


Key findings
FS-VFM consistently outperforms diverse natural and facial Vision Foundation Models across all 11 face security benchmarks, establishing a new generalization baseline via simple fine-tuning. Scaling up model capacity and pre-training data systematically improves robustness, especially on unseen diffusion-generated faces (DiFF), where FS-VFM achieves superior results. The lightweight FS-Adapter enables highly efficient tuning, retaining strong generalization while training only a small fraction (<0.2%) of the large ViT backbone's parameters.
Approach
The approach uses self-supervised pre-training on unlabeled real faces via the FS-VFM framework, designed around three objectives (3C). These objectives synergize Masked Image Modeling (MIM), guided by the novel CRFR-P masking strategy, with Instance Discrimination (ID) via self-distillation to enforce intra-region Consistency, inter-region Coherency, and local-to-global Correspondence. This pre-training equips vanilla Vision Transformers (ViTs) to model facial "realness" and generalize across face security tasks.
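
The section does not spell out how CRFR-P masking works; the sketch below assumes it fully Covers one Random Facial Region and spreads the remaining mask budget Proportionally over the other regions, which is one way to obtain the stated intra-region Consistency and inter-region Coherency. The region map is assumed to come from an external face parser, and all names are illustrative.

# Illustrative CRFR-P-style masking for one image (an assumption about the strategy).
# `region_ids` is a length-N tensor assigning each ViT patch to a facial region
# (eyes, nose, mouth, ...) obtained from a face parser.
import torch

def crfrp_mask(region_ids, mask_ratio=0.75):
    N = region_ids.shape[0]
    n_mask = int(mask_ratio * N)
    regions = region_ids.unique()

    # 1) Fully cover one randomly chosen facial region.
    target = regions[torch.randint(len(regions), (1,))]
    mask = region_ids == target

    # 2) Spread the remaining mask budget proportionally over the other regions
    #    (rounding is ignored for brevity).
    remaining = max(n_mask - int(mask.sum()), 0)
    others = regions[regions != target]
    other_counts = torch.stack([(region_ids == r).sum() for r in others])
    quota = (remaining * other_counts / other_counts.sum()).long()
    for r, q in zip(others, quota):
        idx = (region_ids == r).nonzero(as_tuple=True)[0]
        pick = idx[torch.randperm(len(idx))[:int(q)]]   # random patches within the region
        mask[pick] = True
    return mask                                         # boolean (N,), True = masked
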
Datasets
VGGFace2 (VF2) for pre-training. Evaluated on cross-dataset Deepfake Detection (FF++, CDFv2, DFDCp, DFDC, WDF, CDF++), cross-domain Face Anti-Spoofing (MSU-MFSD, CASIA-FASD, Idiap Replay-Attack, OULU-NPU), and Diffusion Facial Forensics (DiFF).
Model(s)
Vision Transformers (ViT-{S, B, L}/16), used within a Masked Autoencoder (MAE) framework for MIM, coupled with a Siamese network design for Instance Discrimination (ID), and enhanced by the plug-and-play FS-Adapter for efficient tuning.
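
A minimal sketch of a bottleneck adapter on a frozen ViT backbone, in the spirit of the plug-and-play FS-Adapter; the bottleneck width, the placement on a single pooled feature, and the classification head are assumptions rather than the paper's exact design.

# Illustrative bottleneck adapter on a frozen backbone (sizes and placement are
# assumptions). Only the adapter and head are trained.
import torch
import torch.nn as nn

class FSAdapterHead(nn.Module):
    def __init__(self, backbone, dim=1024, bottleneck=64, num_classes=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():       # freeze the pre-trained ViT
            p.requires_grad = False
        self.adapter = nn.Sequential(              # down-project -> GELU -> up-project
            nn.Linear(dim, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, dim),
        )
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        with torch.no_grad():
            feat = self.backbone(x)                # assumed to return a (B, dim) pooled feature
        feat = feat + self.adapter(feat)           # residual bottleneck on the frozen feature
        return self.head(feat)

With ViT-L features (dim 1024) and a bottleneck of 64, the adapter and head contribute roughly 0.13M trainable parameters, comfortably within the "<0.2% of backbone parameters" figure quoted in the key findings.
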
Author countries
China, USA