Scalable Face Security Vision Foundation Model for Deepfake, Diffusion, and Spoofing Detection
Authors: Gaojian Wang, Feng Lin, Tong Wu, Zhisheng Yan, Kui Ren
Published: 2025-10-12 15:38:03+00:00
AI Summary
FS-VFM is proposed as a scalable self-supervised pre-training framework that learns robust, fundamental representations of real faces for generalized face security tasks: deepfake detection (DFD), face anti-spoofing (FAS), and diffusion facial forensics (DiFF). It combines three synergistic objectives (3C), coupling masked image modeling (MIM) under a CRFR-P masking strategy with instance discrimination (ID), to encode both local patterns and global semantics of real faces. A lightweight FS-Adapter is also introduced for ultra-efficient transfer learning on downstream tasks.
Abstract
With abundant, unlabeled real faces, how can we learn robust and transferable facial representations to boost generalization across various face security tasks? We make a first attempt and propose FS-VFM, a scalable self-supervised pre-training framework, to learn fundamental representations of real face images. We introduce three learning objectives, namely 3C, that synergize masked image modeling (MIM) and instance discrimination (ID), empowering FS-VFM to encode both local patterns and global semantics of real faces. Specifically, we formulate various facial masking strategies for MIM and devise a simple yet effective CRFR-P masking, which explicitly prompts the model to pursue meaningful intra-region Consistency and challenging inter-region Coherency. We present a reliable self-distillation mechanism that seamlessly couples MIM with ID to establish an underlying local-to-global Correspondence. After pre-training, vanilla vision transformers (ViTs) serve as universal Vision Foundation Models for downstream Face Security tasks: cross-dataset deepfake detection, cross-domain face anti-spoofing, and unseen diffusion facial forensics. To efficiently transfer the pre-trained FS-VFM, we further propose FS-Adapter, a lightweight plug-and-play bottleneck atop the frozen backbone with a novel real-anchor contrastive objective. Extensive experiments on 11 public benchmarks demonstrate that our FS-VFM consistently generalizes better than diverse VFMs spanning natural and facial domains; fully, weakly, and self-supervised paradigms; and small, base, and large ViT scales, and even outperforms SOTA task-specific methods, while FS-Adapter offers an excellent efficiency-performance trade-off. The code and models are available at https://fsfm-3c.github.io/fsvfm.html.
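To make the pre-training idea concrete, the sketch below illustrates (purely as an assumption-based toy, not the authors' released code) how a masked-image-modeling reconstruction loss can be coupled with an instance-discrimination loss through self-distillation from an EMA teacher, in the spirit of the 3C objectives. The class name `FSVFMSketch`, the EMA momentum, the equal loss weighting, and the random masking stand-in for CRFR-P are all illustrative assumptions.

```python
# Minimal, illustrative sketch of coupling MIM reconstruction with instance
# discrimination via an EMA self-distillation teacher. Not the paper's code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class FSVFMSketch(nn.Module):
    def __init__(self, dim=256, patch_dim=768, ema_momentum=0.996):
        super().__init__()
        # Student encoder/decoder stand in for a ViT backbone and a light MIM decoder.
        self.student = nn.Sequential(nn.Linear(patch_dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.decoder = nn.Linear(dim, patch_dim)   # reconstructs masked patch tokens
        self.proj = nn.Linear(dim, dim)            # projection head for instance discrimination
        # Teacher = EMA copy of the student (self-distillation target), no gradients.
        self.teacher = copy.deepcopy(self.student)
        self.teacher_proj = copy.deepcopy(self.proj)
        for p in list(self.teacher.parameters()) + list(self.teacher_proj.parameters()):
            p.requires_grad = False
        self.m = ema_momentum

    @torch.no_grad()
    def update_teacher(self):
        # Exponential moving average update of the teacher weights.
        for ps, pt in zip(self.student.parameters(), self.teacher.parameters()):
            pt.mul_(self.m).add_(ps, alpha=1 - self.m)
        for ps, pt in zip(self.proj.parameters(), self.teacher_proj.parameters()):
            pt.mul_(self.m).add_(ps, alpha=1 - self.m)

    def forward(self, patches, mask):
        # patches: (B, N, patch_dim) patchified face image; mask: (B, N) bool, True = masked.
        z = self.student(patches)
        # MIM branch: reconstruct only the masked patches (local patterns).
        recon = self.decoder(z)
        mim_loss = F.mse_loss(recon[mask], patches[mask])
        # ID branch: align global (mean-pooled) student and teacher embeddings.
        with torch.no_grad():
            zt = self.teacher_proj(self.teacher(patches).mean(dim=1))
        zs = self.proj(z.mean(dim=1))
        id_loss = 1 - F.cosine_similarity(zs, zt, dim=-1).mean()
        return mim_loss + id_loss                  # equal weighting is an assumption

# Toy usage: random "patch tokens" and a random mask standing in for CRFR-P masking.
model = FSVFMSketch()
patches = torch.randn(4, 196, 768)
mask = torch.rand(4, 196) < 0.75
loss = model(patches, mask)
loss.backward()
model.update_teacher()
```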
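Similarly, the following sketch shows one plausible reading of a lightweight bottleneck adapter placed atop a frozen backbone, trained with a "real-anchor" style contrastive term that pulls real-face features toward a learnable anchor and pushes spoof/fake features away. The bottleneck width, margin, learnable anchor, and hinge formulation are illustrative assumptions; the official FS-Adapter implementation is available at https://fsfm-3c.github.io/fsvfm.html.

```python
# Minimal, illustrative sketch of a bottleneck adapter with a real-anchor
# contrastive loss on top of frozen backbone features. Not the released FS-Adapter.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FSAdapterSketch(nn.Module):
    def __init__(self, feat_dim=768, bottleneck_dim=64, num_classes=2):
        super().__init__()
        # Down-project -> nonlinearity -> up-project: a classic residual bottleneck adapter.
        self.down = nn.Linear(feat_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, feat_dim)
        self.head = nn.Linear(feat_dim, num_classes)
        # Learnable anchor representing the "real face" cluster center (an assumption).
        self.real_anchor = nn.Parameter(torch.randn(feat_dim))

    def forward(self, frozen_feats):
        # frozen_feats: (B, feat_dim) features from the frozen pre-trained ViT.
        adapted = frozen_feats + self.up(F.gelu(self.down(frozen_feats)))
        return adapted, self.head(adapted)

def real_anchor_contrastive(adapted, labels, anchor, margin=0.5):
    # labels: 1 = real, 0 = fake/spoof. Reals are pulled toward the anchor,
    # fakes are pushed at least `margin` away (hinge on cosine distance).
    dist = 1 - F.cosine_similarity(adapted, anchor.unsqueeze(0), dim=-1)  # (B,)
    real_term = (labels.float() * dist).sum() / labels.float().sum().clamp(min=1)
    fake_mask = 1 - labels.float()
    fake_term = (fake_mask * F.relu(margin - dist)).sum() / fake_mask.sum().clamp(min=1)
    return real_term + fake_term

# Toy usage: only the adapter and head are trained; backbone features are frozen inputs.
adapter = FSAdapterSketch()
feats = torch.randn(8, 768)            # would come from the frozen FS-VFM backbone
labels = torch.randint(0, 2, (8,))
adapted, logits = adapter(feats)
loss = F.cross_entropy(logits, labels) + real_anchor_contrastive(adapted, labels, adapter.real_anchor)
loss.backward()
```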