Open-World Deepfake Attribution via Confidence-Aware Asymmetric Learning

Authors: Haiyang Zheng, Nan Pu, Wenjing Li, Teng Long, Nicu Sebe, Zhun Zhong

Published: 2025-12-14 12:31:28+00:00

AI Summary

This paper proposes the Confidence-Aware Asymmetric Learning (CAL) framework for Open-World DeepFake Attribution (OW-DFA), which aims to attribute forgeries to both known and novel manipulation methods. CAL targets two key limitations of existing OW-DFA methods: a confidence skew that produces biased pseudo-labels for novel forgeries, and the unrealistic assumption that the number of forgery categories is known in advance. For the latter, the framework introduces a Dynamic Prototype Pruning (DPP) strategy that estimates the unknown number of forgery categories automatically.

Abstract

The proliferation of synthetic facial imagery has intensified the need for robust Open-World DeepFake Attribution (OW-DFA), which aims to attribute both known and unknown forgeries using labeled data for known types and unlabeled data containing a mixture of known and novel types. However, existing OW-DFA methods face two critical limitations: 1) A confidence skew that leads to unreliable pseudo-labels for novel forgeries, resulting in biased training. 2) An unrealistic assumption that the number of unknown forgery types is known *a priori*. To address these challenges, we propose a Confidence-Aware Asymmetric Learning (CAL) framework, which adaptively balances model confidence across known and novel forgery types. CAL mainly consists of two components: Confidence-Aware Consistency Regularization (CCR) and Asymmetric Confidence Reinforcement (ACR). CCR mitigates pseudo-label bias by dynamically scaling sample losses based on normalized confidence, gradually shifting the training focus from high- to low-confidence samples. ACR complements this by separately calibrating confidence for known and novel classes through selective learning on high-confidence samples, guided by their confidence gap. Together, CCR and ACR form a mutually reinforcing loop that significantly improves the model's OW-DFA performance. Moreover, we introduce a Dynamic Prototype Pruning (DPP) strategy that automatically estimates the number of novel forgery types in a coarse-to-fine manner, removing the need for unrealistic prior assumptions and enhancing the scalability of our methods to real-world OW-DFA scenarios. Extensive experiments on the standard OW-DFA benchmark and a newly extended benchmark incorporating advanced manipulations demonstrate that CAL consistently outperforms previous methods, achieving new state-of-the-art performance on both known and novel forgery attribution.
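As a concrete illustration of the CCR idea described in the abstract, below is a minimal PyTorch sketch of confidence-weighted pseudo-label training. The weak/strong augmentation split, the linear high-to-low-confidence schedule, and all names (`ccr_loss`, the clamp epsilon) are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ccr_loss(logits_weak, logits_strong, epoch, total_epochs):
    """Minimal sketch of confidence-aware consistency regularization.

    Pseudo-labels come from weakly augmented views; the per-sample
    cross-entropy on strongly augmented views is re-weighted by a
    normalized-confidence term. The schedule (shifting emphasis from
    high- to low-confidence samples over training) is an assumed form.
    """
    probs = F.softmax(logits_weak.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)                    # per-sample confidence
    conf_norm = conf / conf.max().clamp(min=1e-8)      # normalize to [0, 1]

    t = epoch / max(total_epochs, 1)                   # training progress
    # Early training: weight ~ confidence (trust easy samples).
    # Late training: weight ~ 1 - confidence (focus on hard samples).
    weights = (1 - t) * conf_norm + t * (1 - conf_norm)

    ce = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (weights * ce).mean()
```

Under this assumed schedule, supervision gradually moves toward low-confidence samples (often the novel forgery types), which matches the bias-mitigation role the abstract attributes to CCR.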


Key findings
CAL consistently achieves state-of-the-art performance, outperforming prior methods such as CPL by an average of 4.5% in All Accuracy and 7.3% in Novel Accuracy on the OW-DFA-40 benchmark. The DPP strategy estimates the number of unknown forgery types with minimal error, improving the framework's scalability to real-world open-world scenarios.
Approach
CAL combines Confidence-Aware Consistency Regularization (CCR), which adaptively scales per-sample losses based on normalized confidence to mitigate pseudo-label bias, with Asymmetric Confidence Reinforcement (ACR), which calibrates confidence for known and novel classes separately through selective learning on high-confidence samples. It also integrates a Frequency-Guided Feature Enhancement (FFE) module and employs Dynamic Prototype Pruning (DPP), a coarse-to-fine mechanism that estimates the number of novel forgery types dynamically during training; a sketch of one pruning round follows below.
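The following is a rough PyTorch sketch of what a single coarse-to-fine pruning round in a DPP-style estimator could look like. The cosine-similarity assignment and the `min_frac` threshold are assumptions, not details taken from the paper.

```python
import torch

def prune_prototypes(features, prototypes, min_frac=0.01):
    """Minimal sketch of one pruning round in a DPP-style estimator.

    Start from an over-complete set of class prototypes, assign every
    unlabeled feature to its nearest prototype, and drop prototypes
    that attract too few samples. Repeating this shrinks the prototype
    set coarse-to-fine toward the true number of classes.
    """
    # Cosine similarity between L2-normalized features and prototypes.
    f = torch.nn.functional.normalize(features, dim=1)
    p = torch.nn.functional.normalize(prototypes, dim=1)
    assign = (f @ p.t()).argmax(dim=1)                 # nearest prototype id

    counts = torch.bincount(assign, minlength=prototypes.size(0))
    keep = counts >= int(min_frac * features.size(0))  # prune sparse prototypes
    return prototypes[keep], int(keep.sum())
```

Iterating this until no prototype falls below the threshold yields an estimate of the total class count without assuming it *a priori*.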
Datasets
OW-DFA benchmark, OW-DFA-40 benchmark (constructed using FaceForensics++ and Celeb-DF).
Model(s)
ResNet-50 (ImageNet pre-trained) as the image encoder, combined with a prototypical classifier and a lightweight convolutional network for frequency feature enhancement.
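To make the described pipeline concrete, here is a hypothetical PyTorch sketch wiring the stated components together: an ImageNet pre-trained ResNet-50 encoder, a lightweight convolutional frequency branch, and a prototypical classifier. The FFT amplitude input, layer sizes, and concatenation-based fusion are assumptions; the paper's FFE module may differ.

```python
import torch
import torch.nn as nn
import torchvision

class CALBackbone(nn.Module):
    """Hypothetical sketch of the described architecture: ResNet-50
    encoder, small conv branch over an FFT-based frequency view, and a
    prototypical classifier (cosine similarity to learnable class
    prototypes). Sizes and fusion are assumptions, not the paper's design.
    """

    def __init__(self, num_prototypes, feat_dim=256):
        super().__init__()
        resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # -> 2048-d
        self.freq_branch = nn.Sequential(                 # lightweight conv net
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(2048 + 64, feat_dim)
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))

    def forward(self, x):
        # Amplitude spectrum as a simple frequency-domain view of the input.
        freq = torch.fft.fft2(x, norm="ortho").abs()
        z = torch.cat([self.encoder(x).flatten(1), self.freq_branch(freq)], dim=1)
        z = nn.functional.normalize(self.proj(z), dim=1)
        protos = nn.functional.normalize(self.prototypes, dim=1)
        return z @ protos.t()                             # cosine-similarity logits
```

A prototypical classifier of this kind pairs naturally with DPP: pruning a candidate class reduces to dropping the corresponding row of `self.prototypes`.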
Author countries
Italy, China