UMCL: Unimodal-generated Multimodal Contrastive Learning for Cross-compression-rate Deepfake Detection

Authors: Ching-Yi Lai, Chih-Yu Jian, Pei-Cheng Chuang, Chia-Ming Lee, Chih-Chung Hsu, Chiou-Ting Hsu, Chia-Wen Lin

Published: 2025-11-24 10:56:22+00:00

AI Summary

The UMCL framework is introduced to achieve robust cross-compression-rate (CCR) deepfake detection by generating multimodal features (rPPG, landmarks, semantic embeddings) from a single visual input. These features are aligned using Affinity-driven Semantic Alignment (ASA) to model inter-modal relationships, and feature consistency across quality levels is ensured via Cross-Quality Similarity Learning (CQSL). The method demonstrates superior performance and resilience across various compression rates and manipulation types, establishing a new benchmark for robust deepfake detection.

Abstract

In deepfake detection, the varying degrees of compression employed by social media platforms pose significant challenges for model generalization and reliability. Although existing methods have progressed from single-modal to multimodal approaches, they face critical limitations: single-modal methods struggle with feature degradation under data compression in social media streaming, while multimodal approaches require expensive data collection and labeling and suffer from inconsistent modal quality or accessibility in real-world scenarios. To address these challenges, we propose a novel Unimodal-generated Multimodal Contrastive Learning (UMCL) framework for robust cross-compression-rate (CCR) deepfake detection. In the training stage, our approach transforms a single visual modality into three complementary features: compression-robust rPPG signals, temporal landmark dynamics, and semantic embeddings from pre-trained vision-language models. These features are explicitly aligned through an affinity-driven semantic alignment (ASA) strategy, which models inter-modal relationships through affinity matrices and optimizes their consistency through contrastive learning. Subsequently, our cross-quality similarity learning (CQSL) strategy enhances feature robustness across compression rates. Extensive experiments demonstrate that our method achieves superior performance across various compression rates and manipulation types, establishing a new benchmark for robust deepfake detection. Notably, our approach maintains high detection accuracy even when individual features degrade, while providing interpretable insights into feature relationships through explicit alignment.
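To make the ASA idea concrete, here is a minimal sketch in PyTorch, assuming ASA computes a batch-level cosine-similarity affinity matrix for each modality and penalizes disagreement between those matrices; the function names and the simple MSE consistency term are illustrative stand-ins (the paper optimizes consistency through contrastive learning), not the authors' implementation.

```python
# Hypothetical sketch of affinity-driven alignment; the names and the MSE
# consistency term are illustrative, not the paper's exact ASA loss.
import torch
import torch.nn.functional as F

def affinity(z: torch.Tensor) -> torch.Tensor:
    """Batch affinity matrix: cosine similarity between every pair of samples."""
    z = F.normalize(z, dim=-1)
    return z @ z.t()                              # (B, B)

def asa_consistency_loss(z_rppg, z_landmark, z_semantic):
    """Penalize disagreement between the three modalities' affinity structures."""
    mats = [affinity(z) for z in (z_rppg, z_landmark, z_semantic)]
    loss = torch.zeros((), dtype=mats[0].dtype)
    for i in range(len(mats)):
        for j in range(i + 1, len(mats)):
            loss = loss + F.mse_loss(mats[i], mats[j])
    return loss

# Toy usage: B = 8 clips, D = 128 features per modality.
B, D = 8, 128
zs = [torch.randn(B, D, requires_grad=True) for _ in range(3)]
asa_consistency_loss(*zs).backward()
```

Aligning affinity structure rather than raw embeddings lets each modality keep its own feature space while still agreeing on which samples resemble which, which is what makes the alignment interpretable.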


Key findings
UMCL achieved superior results in cross-compression-rate, cross-dataset, and cross-manipulation evaluations, establishing a new benchmark for robust deepfake detection under heavy compression. The framework remained notably robust to modality degradation (e.g., rPPG sampling and adversarial text prompts), improving AUC by more than 30% over the prior baseline (CPML) under severe degradation. These gains are attributed primarily to the synergy between ASA, which explicitly aligns features, and CQSL, which enforces cross-quality consistency.
Approach
UMCL transforms a single visual input into three complementary feature modalities: rPPG signals, temporal landmark dynamics, and semantic embeddings derived from text prompts. These features are aligned using Affinity-driven Semantic Alignment (ASA), which employs affinity matrices and contrastive learning to ensure semantic consistency and prevent reliance on single modalities. Feature robustness across different compression levels is further enhanced via Cross-Quality Similarity Learning (CQSL) applied primarily to rPPG signals.
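A minimal sketch of what a cross-quality objective like CQSL could look like, assuming an InfoNCE-style pairing in which the positive for each clip is the same clip at a different compression level and the other clips in the batch serve as negatives; the function name and temperature value are assumptions, not the paper's exact formulation.

```python
# Hypothetical cross-quality similarity objective: pull together features of
# the same clip at two compression levels, push apart different clips.
import torch
import torch.nn.functional as F

def cqsl_loss(z_raw: torch.Tensor, z_comp: torch.Tensor, tau: float = 0.1):
    """z_raw, z_comp: (B, D) rPPG features of the same B clips at two qualities."""
    z_raw = F.normalize(z_raw, dim=-1)
    z_comp = F.normalize(z_comp, dim=-1)
    logits = z_raw @ z_comp.t() / tau             # (B, B) similarity grid
    targets = torch.arange(z_raw.size(0))         # diagonal = matching clip
    return F.cross_entropy(logits, targets)

# Toy usage: features of 8 clips extracted at raw and heavily compressed quality.
loss = cqsl_loss(torch.randn(8, 128), torch.randn(8, 128))
```

The intent of training against compressed views directly is for the rPPG branch to stay informative after a platform re-encodes the video.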
Datasets
FaceForensics++ (FF++), Celeb-DF, DFD, DFDC
Model(s)
PhysFormer (P-encoder), LRNet (L-encoder), CLIP ViT-B/16 (T-encoder), MTCNN (face detection)
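For orientation, here is a schematic of how these branches could be wired together; the encoder classes below are lightweight placeholders, not the actual PhysFormer, LRNet, or CLIP implementations, and the concatenation-plus-linear fusion head is an assumption for illustration only.

```python
# Placeholder fan-out of one visual input into three feature branches.
import torch
import torch.nn as nn

class DummyEncoder(nn.Module):
    """Stand-in for a real branch encoder; pools the clip and projects to `dim`."""
    def __init__(self, in_dim=3 * 64 * 64, dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)

    def forward(self, frames):                    # frames: (B, T, C, H, W)
        return self.proj(frames.mean(dim=1).flatten(1))

class UMCLStyleBackbone(nn.Module):
    """One cropped face clip in; three modality embeddings and a real/fake score out."""
    def __init__(self, p_enc, l_enc, t_enc, dim=128):
        super().__init__()
        self.p_enc = p_enc                        # rPPG branch (PhysFormer-like)
        self.l_enc = l_enc                        # landmark-dynamics branch (LRNet-like)
        self.t_enc = t_enc                        # semantic branch (CLIP-like)
        self.head = nn.Linear(3 * dim, 2)         # real vs. fake

    def forward(self, frames):
        z_p, z_l, z_t = self.p_enc(frames), self.l_enc(frames), self.t_enc(frames)
        logits = self.head(torch.cat([z_p, z_l, z_t], dim=-1))
        return logits, (z_p, z_l, z_t)            # embeddings feed the ASA/CQSL losses

# Toy usage: 2 clips of 8 frames, 64x64 crops (as produced by a face detector like MTCNN).
model = UMCLStyleBackbone(DummyEncoder(), DummyEncoder(), DummyEncoder())
logits, (z_p, z_l, z_t) = model(torch.randn(2, 8, 3, 64, 64))
```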
Author countries
Taiwan