Disentangling Speaker Traits for Deepfake Source Verification via Chebyshev Polynomial and Riemannian Metric Learning

Authors: Xi Xuan, Wenxin Zhang, Zhiyu Li, Jennifer Williams, Ville Hautamäki, Tomi H. Kinnunen

Published: 2026-03-23 12:05:57+00:00

Comment: Submitted to Interspeech 2026; The code, evaluation protocols and demo website are available at https://github.com/xxuan-acoustics/RiemannSD-Net

AI Summary

The paper addresses the challenge of speaker trait entanglement in speech deepfake source verification by proposing a speaker-disentangled metric learning (SDML) framework. This framework incorporates two novel loss functions: ChebySD-AAM, which uses Chebyshev polynomials to stabilize disentanglement optimization, and RiemannSD-AAM, which leverages Riemannian metric learning in hyperbolic space to reduce speaker information and learn more discriminative source features. The goal is to enhance the robustness and accuracy of identifying the source generator, independent of speaker characteristics.

Abstract

Speech deepfake source verification systems aim to determine whether two synthetic speech utterances originate from the same source generator, often assuming that the resulting source embeddings are independent of speaker traits. However, this assumption remains unverified. In this paper, we first investigate the impact of speaker factors on source verification. We propose a speaker-disentangled metric learning (SDML) framework incorporating two novel loss functions. The first leverages Chebyshev polynomials to mitigate gradient instability during disentanglement optimization. The second projects source and speaker embeddings into hyperbolic space, leveraging Riemannian metric distances to reduce speaker information and learn more discriminative source features. Experimental results on the MLAAD benchmark, evaluated under four newly proposed protocols designed for source-speaker disentanglement scenarios, demonstrate the effectiveness of the SDML framework. The code, evaluation protocols and demo website are available at https://github.com/xxuan-acoustics/RiemannSD-Net.
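The abstract does not spell out how the Chebyshev-based loss is formulated, but the likely ingredient is the Chebyshev polynomial of the first kind, whose values stay bounded in [-1, 1] on that interval, which can keep mapped loss values (and hence gradients) from blowing up. A minimal sketch of the standard three-term recurrence, purely illustrative and not the paper's actual loss:

```python
def chebyshev(n, x):
    """Chebyshev polynomial of the first kind, T_n(x), via the recurrence
    T_0(x) = 1, T_1(x) = x, T_n(x) = 2x * T_{n-1}(x) - T_{n-2}(x).
    For x in [-1, 1], |T_n(x)| <= 1, so outputs remain bounded."""
    if n == 0:
        return 1.0
    t_prev, t_curr = 1.0, x
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr
```

For example, a cosine similarity score in [-1, 1] passed through `chebyshev(n, score)` stays in [-1, 1] for any degree `n`; how the paper combines this with the AAM-Softmax margin is not described in the abstract.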


Key findings
The proposed ChebySD-AAM and RiemannSD-AAM loss functions consistently outperformed the AAM-Softmax baseline across four newly designed evaluation protocols for source-speaker disentanglement. RiemannSD-AAM, particularly when combined with the ResNet34 encoder, achieved the best overall results, demonstrating superior robustness and generalization by effectively modeling hierarchical structures in hyperbolic space. Ablation studies confirmed that speaker disentanglement is crucial for performance, especially in scenarios involving unseen sources.
Approach
The SDML framework employs a dual-branch architecture, with a trainable source encoder for deepfake source embeddings and a frozen speaker verification model for speaker embeddings. It introduces ChebySD-AAM to mitigate gradient instability and penalize speaker alignment using an adaptive margin, and RiemannSD-AAM to project embeddings into hyperbolic space, utilizing Riemannian distances to minimize speaker information leakage and learn robust source features.
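The summary does not give the exact hyperbolic formulation, but the standard choice for Riemannian metric learning is the Poincaré ball model, where the distance between two embeddings grows rapidly near the boundary and naturally encodes hierarchy. A minimal sketch of that distance, assuming the Poincaré ball model (the projection step and its use inside the actual RiemannSD-AAM loss are assumptions, not the paper's implementation):

```python
import numpy as np

def project_to_ball(x, eps=1e-5):
    # Clip an embedding to lie strictly inside the open unit ball,
    # as required by the Poincare ball model.
    norm = np.linalg.norm(x)
    max_norm = 1.0 - eps
    if norm >= max_norm:
        x = x * (max_norm / norm)
    return x

def poincare_distance(u, v):
    # Riemannian distance in the Poincare ball:
    # d(u, v) = arccosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / denom)
```

Under this reading, maximizing the Poincaré distance between a source embedding and the frozen speaker embedding would penalize speaker information leaking into the source branch; the weighting of that term against the classification loss is not specified in the summary.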
Datasets
MLAAD v8, MUSAN, RIRs
Model(s)
ECAPA-TDNN, ResNet34, AASIST, Mamba
Author countries
Finland, Hong Kong, China, United Kingdom