SFANet: Spatial-Frequency Attention Network for Deepfake Detection
Authors: Vrushank Ahire, Aniruddh Muley, Shivam Zample, Siddharth Verma, Pranav Menon, Surbhi Madan, Abhinav Dhall
Published: 2025-10-06 09:35:57+00:00
AI Summary
The paper introduces SFANet, a novel ensemble framework for robust deepfake detection that addresses generalization failures across diverse generation techniques. The framework combines transformer-based architectures (Swin Transformers and ViTs) with texture-based methods, using techniques such as sequential training, frequency splitting, and face segmentation. The resulting model achieves state-of-the-art performance on the diverse DFWild-Cup dataset.
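A minimal sketch of the ensemble idea described above: average the logits of a Swin Transformer branch, a ViT branch, and a texture-oriented branch. The specific model names, the stand-in texture branch, and the equal-weight fusion are assumptions for illustration; the paper's exact architectures and fusion scheme may differ.

```python
import torch
import torch.nn as nn
import timm  # assumed dependency for the transformer backbones


class DeepfakeEnsemble(nn.Module):
    """Hypothetical three-branch ensemble in the spirit of SFANet."""

    def __init__(self):
        super().__init__()
        # Transformer branches (global feature extractors).
        self.swin = timm.create_model(
            "swin_base_patch4_window7_224", pretrained=False, num_classes=1
        )
        self.vit = timm.create_model(
            "vit_base_patch16_224", pretrained=False, num_classes=1
        )
        # Stand-in texture branch: a shallow CNN over the raw image
        # (hypothetical; the paper's texture-based method is not specified here).
        self.texture = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, x):  # x: (B, 3, 224, 224) face crops
        # Stack per-branch real/fake logits and fuse by averaging (assumed weights).
        logits = torch.stack([self.swin(x), self.vit(x), self.texture(x)], dim=0)
        return logits.mean(dim=0)  # (B, 1) fused logit
```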
Abstract
Detecting manipulated media has become a pressing issue with the recent rise of deepfakes. Most existing approaches fail to generalize across diverse datasets and generation techniques. We therefore propose a novel ensemble framework that combines the strengths of transformer-based architectures, such as Swin Transformers and ViTs, with texture-based methods to achieve better detection accuracy and robustness. Our method introduces innovative data-splitting, sequential training, frequency splitting, patch-based attention, and face segmentation techniques to handle dataset imbalances, enhance high-impact regions (e.g., eyes and mouth), and improve generalization. Our model achieves state-of-the-art performance on the DFWild-Cup dataset, a diverse subset of eight deepfake datasets. The ensemble benefits from the complementarity of these approaches, with transformers excelling at global feature extraction and texture-based methods providing interpretability. This work demonstrates that hybrid models can effectively address the evolving challenges of deepfake detection, offering a robust solution for real-world applications.
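One plausible reading of the "frequency splitting" step is sketched below: each face image is separated into low- and high-frequency components with an FFT low-pass mask, so that different branches can focus on smooth content versus the fine, high-frequency artifacts typical of generated faces. The FFT-based formulation and the cutoff radius are assumptions made for illustration; the abstract does not pin down the exact transform.

```python
import torch


def frequency_split(img: torch.Tensor, cutoff: float = 0.1):
    """Split (B, C, H, W) images into (low_freq, high_freq) components.

    `cutoff` is the low-pass radius as a fraction of the smaller image
    dimension (a hypothetical parameter, not taken from the paper).
    """
    B, C, H, W = img.shape
    # 2-D FFT with the zero frequency shifted to the center of the spectrum.
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    # Circular low-pass mask around the centered zero frequency.
    ys = torch.arange(H, device=img.device) - H // 2
    xs = torch.arange(W, device=img.device) - W // 2
    dist = torch.sqrt((ys[:, None] ** 2 + xs[None, :] ** 2).float())
    mask = (dist <= cutoff * min(H, W)).to(img.dtype)
    # Inverse transform of the masked spectrum gives the low-frequency image.
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
    return low, img - low  # high-frequency part is the residual


# Usage: feed `low` and `high` to separate branches of the ensemble.
low, high = frequency_split(torch.rand(4, 3, 224, 224))
```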