A Backbone Benchmarking Study on Self-supervised Learning as an Auxiliary Task with Texture-based Local Descriptors for Face Analysis

Authors: Shukesh Reddy, Abhijit Das

Published: 2026-03-23 16:49:50+00:00

Comment: Accepted for publication in SN Computer Science

AI Summary

This study benchmarks various Vision Transformer backbones within a local pattern self-supervised auxiliary task (L-SSAT) framework that integrates texture-based local descriptors for robust face analysis. It investigates the impact of backbone choice on tasks such as deepfake detection, attribute prediction, and emotion classification. The findings reveal that backbone effectiveness is task-dependent, with no single unified backbone performing optimally across all tasks.

Abstract

In this work, we benchmark different backbones and study their impact on self-supervised learning (SSL) as an auxiliary task that blends texture-based local descriptors into feature modelling for efficient face analysis. Previous work has established that combining a primary task with a self-supervised auxiliary task enables more robust and discriminative representation learning. We employ backbones ranging from shallow to deep for the SSL task of a Masked Auto-Encoder (MAE), used as an auxiliary objective to reconstruct texture features such as local patterns alongside the primary task in local pattern SSAT (L-SSAT), ensuring robust and unbiased face analysis. To expand the benchmark, we conduct a comprehensive comparative analysis across multiple model configurations within the proposed framework. To this end, we address three research questions: What is the role of the backbone in the performance of L-SSAT? What type of backbone is effective for different face analysis tasks? And is there a generalized backbone for effective face analysis with L-SSAT? Towards answering these questions, we provide a detailed study and experiments. The performance evaluation demonstrates that the best backbone for the proposed method is highly dependent on the downstream task, achieving average accuracies of 0.94 on FaceForensics++, 0.87 on CelebA, and 0.88 on AffectNet. No unified backbone offers consistent feature-representation quality and generalisation capability across the face analysis paradigms considered, namely face attribute prediction, emotion classification, and deepfake detection.


Key findings
The study found that backbone effectiveness within the L-SSAT framework is highly task-dependent, with no single unified backbone generalizing across face attribute prediction, emotion classification, and deepfake detection. Larger backbones (ViT-H) generally performed best for deepfake detection and multi-class emotion recognition, while moderate ones (ViT-B) showed stable generalization for attribute prediction.
Approach
The authors propose Local Pattern-SSAT (L-SSAT), a joint optimization framework that integrates a primary classification task with a self-supervised auxiliary task. This auxiliary task uses a Masked Auto-Encoder (MAE) to reconstruct texture features (Local Directional Pattern, LDP) from masked inputs, alongside a primary task on RGB inputs, to enhance robust face analysis across different Vision Transformer backbones.
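The Local Directional Pattern (LDP) targets reconstructed by the auxiliary MAE are a standard texture descriptor: each pixel gets an 8-bit code whose bits mark the strongest of eight Kirsch edge-mask responses. The following is a minimal NumPy sketch of that descriptor as commonly defined in the literature; the paper does not specify its exact LDP variant, so the choice of k = 3 strongest directions and the mask ordering here are illustrative assumptions.

```python
import numpy as np

# Eight Kirsch edge masks (one per compass direction).
KIRSCH = [np.array(m, dtype=np.float64) for m in [
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],    # East
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],    # North-East
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],    # North
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],    # North-West
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],    # West
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],    # South-West
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],    # South
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],    # South-East
]]

def ldp_code(img, k=3):
    """Per-pixel 8-bit LDP code: bits set for the k strongest Kirsch responses.

    Ties at the k-th response may set more than k bits; fine for a sketch.
    """
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    # Correlate each mask with the image (explicit 3x3 loop, no SciPy needed).
    resp = np.zeros((8, h, w))
    for d, mask in enumerate(KIRSCH):
        for i in range(3):
            for j in range(3):
                resp[d] += mask[i, j] * pad[i:i + h, j:j + w]
    resp = np.abs(resp)
    # Per-pixel threshold = k-th largest response; set bits at/above it.
    kth = np.sort(resp, axis=0)[-k]            # shape (h, w)
    bits = (resp >= kth).astype(np.uint8)      # shape (8, h, w)
    weights = (1 << np.arange(8)).reshape(8, 1, 1)
    return (bits * weights).sum(axis=0).astype(np.uint8)
```

In the L-SSAT pipeline described above, a map like this would serve as the reconstruction target for the masked auto-encoder, while the primary head consumes the RGB input.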
Datasets
FaceForensics++, CelebA, AffectNet
Model(s)
Vision Transformers (ViT-B, ViT-L, ViT-H) integrated with a VideoMAE encoder for the Masked Auto-Encoder (MAE) based self-supervised auxiliary task.
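The joint optimization described above pairs a primary classification loss with an MAE-style reconstruction loss computed only on masked patches. A minimal NumPy sketch of such a combined objective follows; the weighting factor `lam` and the plain MSE reconstruction term are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def softmax_ce(logits, label):
    # Cross-entropy of the primary classification head, in stable
    # log-sum-exp form: log(sum_j e^{z_j}) - z_label.
    return np.logaddexp.reduce(logits) - logits[label]

def joint_loss(logits, label, recon, ldp_target, mask, lam=0.5):
    """Primary CE plus lam * masked-region MSE against the LDP target.

    `mask` is a boolean array marking the patches hidden from the MAE
    encoder; the auxiliary loss is averaged over those positions only.
    """
    primary = softmax_ce(logits, label)
    diff = (recon - ldp_target) ** 2
    aux = diff[mask].mean() if mask.any() else 0.0
    return primary + lam * aux
```

In training, both terms would be backpropagated through the shared ViT backbone, which is how the auxiliary texture-reconstruction signal shapes the features used by the primary task.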
Author countries
India