Addressing Gradient Misalignment in Data-Augmented Training for Robust Speech Deepfake Detection

Authors: Duc-Tuan Truong, Tianchi Liu, Junjie Li, Ruijie Tao, Kong Aik Lee, Eng Siong Chng

Published: 2025-09-25 02:31:54+00:00

Comment: 5 pages, 4 figures

AI Summary

This paper addresses gradient misalignment in data-augmented training for speech deepfake detection (SDD), where conflicting gradients from original and augmented inputs can hinder optimization. The authors propose a dual-path data-augmented (DPDA) training framework with gradient alignment, processing original and augmented speech in parallel so their backpropagated gradients can be compared and aligned. This resolves conflicting updates, accelerates convergence, and yields up to an 18.69% relative Equal Error Rate reduction over the baseline on the In-the-Wild dataset.

Abstract

In speech deepfake detection (SDD), data augmentation (DA) is commonly used to improve model generalization across varied speech conditions and spoofing attacks. However, during training, the backpropagated gradients from original and augmented inputs may misalign, which can result in conflicting parameter updates. These conflicts could hinder convergence and push the model toward suboptimal solutions, thereby reducing the benefits of DA. To investigate and address this issue, we design a dual-path data-augmented (DPDA) training framework with gradient alignment for SDD. In our framework, each training utterance is processed through two input paths: one using the original speech and the other with its augmented version. This design allows us to compare and align their backpropagated gradient directions to reduce optimization conflicts. Our analysis shows that approximately 25% of training iterations exhibit gradient conflicts between the original inputs and their augmented counterparts when using RawBoost augmentation. By resolving these conflicts with gradient alignment, our method accelerates convergence by reducing the number of training epochs and achieves up to an 18.69% relative reduction in Equal Error Rate on the In-the-Wild dataset compared to the baseline.


Key findings
Gradient conflicts between original and augmented inputs occur frequently (approximately 25% of iterations with RawBoost augmentation) and are linked to distinct loss landscapes for the two input paths. The proposed gradient alignment method, particularly PCGrad, consistently improves performance across model architectures and augmentation strategies. It accelerates convergence, reducing the number of training epochs needed, and achieves up to an 18.69% relative reduction in Equal Error Rate on the In-the-Wild dataset compared to the baseline.
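The reported conflict rate can be estimated by checking, at each iteration, the sign of the dot product between the flattened gradients from the two paths: a negative dot product means the angle between them exceeds 90 degrees. A minimal sketch of this check (the function name and toy vectors are illustrative, not from the paper):

```python
import numpy as np

def is_conflicting(g_orig: np.ndarray, g_aug: np.ndarray) -> bool:
    """Two gradient vectors conflict when their dot product is negative,
    i.e. the cosine of the angle between them is below zero."""
    return float(np.dot(g_orig, g_aug)) < 0

# Toy flattened gradients standing in for the two input paths.
g_clean = np.array([1.0, 0.0])
g_rawboost = np.array([-1.0, 1.0])
print(is_conflicting(g_clean, g_rawboost))  # True: dot product is -1
```

Counting how often this returns True over training iterations gives the conflict rate the paper measures (about 25% with RawBoost).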
Approach
The authors propose a dual-path data-augmented (DPDA) training framework. Each training utterance is processed through two input paths: one with the original speech and another with its augmented version. Gradient alignment methods (PCGrad, GradVac, CAGrad) are then applied to compare and align the backpropagated gradients from these two paths, reducing optimization conflicts.
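Of the three alignment methods, PCGrad is the simplest to illustrate: when the two paths' gradients conflict, each gradient is projected onto the normal plane of the other before they are summed. A minimal two-gradient sketch operating on flattened gradient vectors (names and the toy usage are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def pcgrad_combine(g_orig: np.ndarray, g_aug: np.ndarray) -> np.ndarray:
    """PCGrad for two flattened gradients: if they conflict (negative dot
    product), project each onto the normal plane of the other, then sum
    the projected gradients into one update direction."""
    g1, g2 = g_orig.copy(), g_aug.copy()
    dot = float(np.dot(g_orig, g_aug))
    if dot < 0:  # conflicting directions: remove the opposing component
        g1 = g_orig - (dot / np.dot(g_aug, g_aug)) * g_aug
        g2 = g_aug - (dot / np.dot(g_orig, g_orig)) * g_orig
    return g1 + g2

# Toy conflicting gradients from the original and augmented paths.
combined = pcgrad_combine(np.array([1.0, 0.0]), np.array([-1.0, 1.0]))
```

After projection, the combined gradient has a non-negative dot product with both original gradients, so neither path's parameter update is cancelled by the other.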
Datasets
Training/validation: ASVspoof2019 Logical Access (LA). Evaluation: ASVspoof2021 DF (21DF), In-the-Wild (ITW), and the Fake-or-Real (FoR) norm-test subset.
Model(s)
XLSR-AASIST, XLSR-TCM-Conformer, XLSR-Mamba.
Author countries
Singapore, Hong Kong