LAA-X: Unified Localized Artifact Attention for Quality-Agnostic and Generalizable Face Forgery Detection
Authors: Dat Nguyen, Enjie Ghorbel, Anis Kacem, Marcella Astrid, Djamila Aouada
Published: 2026-04-05 12:08:48+00:00
Comment: Journal version of LAA-Net (CVPR 2024)
AI Summary
The paper introduces Localized Artifact Attention X (LAA-X), a novel deepfake detection framework designed for robustness against high-quality forgeries and generalization to unseen manipulations. It employs an explicit attention strategy through a multi-task learning framework combined with blending-based data synthesis to guide the model toward localized, artifact-prone regions. LAA-X is compatible with both CNN (LAA-Net) and transformer (LAA-Former/LAA-Swin) backbones, performing on par with state-of-the-art methods across multiple benchmarks despite being trained only on real and pseudo-fake samples.
Abstract
In this paper, we propose Localized Artifact Attention X (LAA-X), a novel deepfake detection framework that is both robust to high-quality forgeries and capable of generalizing to unseen manipulations. Existing approaches typically rely on binary classifiers coupled with implicit attention mechanisms, which often fail to generalize beyond known manipulations. In contrast, LAA-X introduces an explicit attention strategy based on a multi-task learning framework combined with blending-based data synthesis. Auxiliary tasks are designed to guide the model toward localized, artifact-prone (i.e., vulnerable) regions. The proposed framework is compatible with both CNN and transformer backbones, resulting in two different versions, namely, LAA-Net and LAA-Former, respectively. Despite being trained only on real and pseudo-fake samples, LAA-X competes with state-of-the-art methods across multiple benchmarks. Code and pre-trained weights for LAA-Net (https://github.com/10Ring/LAA-Net) and LAA-Former (https://github.com/10Ring/LAA-Former) are publicly available.
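The abstract's blending-based data synthesis refers to creating pseudo-fakes by compositing a donor face region onto a real image with a soft mask, so that subtle boundary artifacts appear exactly in the localized regions the auxiliary tasks attend to. The sketch below is a generic, minimal illustration of that idea, not the paper's exact recipe: the function names (`blend_pseudo_fake`, `soft_face_mask`) and the circular mask construction are illustrative assumptions.

```python
import numpy as np

def blend_pseudo_fake(background, foreground, mask):
    """Alpha-blend a donor face region onto a real background image.

    `mask` is a soft (h, w) map in [0, 1]; its transition band is where
    blending artifacts concentrate, which is what an artifact-attention
    detector is trained to localize. Illustrative sketch only.
    """
    mask = mask[..., None]  # broadcast the (h, w) mask over color channels
    return mask * foreground + (1.0 - mask) * background

def soft_face_mask(h, w, center, radius, falloff=10.0):
    """Soft circular mask with a smooth boundary (hypothetical helper)."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    # 1 well inside the circle, 0 well outside, smooth ramp in between
    return np.clip((radius - dist) / falloff + 0.5, 0.0, 1.0)

# Example: synthesize one pseudo-fake from a real image and a donor crop.
h, w = 64, 64
real = np.random.rand(h, w, 3)
donor = np.random.rand(h, w, 3)
mask = soft_face_mask(h, w, center=(32, 32), radius=20)
pseudo_fake = blend_pseudo_fake(real, donor, mask)
```

A detector trained on such pairs never sees a real manipulation method; supervision comes from the blending boundary itself, which is why the approach can generalize to unseen forgeries.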