Curved Worlds, Clear Boundaries: Generalizing Speech Deepfake Detection using Hyperbolic and Spherical Geometry Spaces

Authors: Farhan Sheth, Girish, Mohd Mujtaba Akhtar, Muskaan Singh

Published: 2025-11-13 20:43:31+00:00

AI Summary

This paper introduces RHYME, a unified framework for generalizable audio deepfake detection (ADD) across diverse synthesis paradigms including conventional TTS and modern diffusion/flow-matching generators. RHYME achieves synthesis-invariant alignment by fusing utterance-level embeddings from diverse pretrained speech encoders using non-Euclidean projections. By mapping representations into complementary hyperbolic and spherical manifolds, the framework captures hierarchical generator families and periodic spectral artifacts, leading to improved cross-paradigm generalization.

Abstract

In this work, we address the challenge of generalizable audio deepfake detection (ADD) across diverse speech synthesis paradigms, including conventional text-to-speech (TTS) systems and modern diffusion or flow-matching (FM)-based generators. Prior work has mostly targeted individual synthesis families and often fails to generalize across paradigms due to overfitting to generation-specific artifacts. We hypothesize that synthetic speech, irrespective of its generative origin, leaves behind shared structural distortions in the embedding space that can be aligned through geometry-aware modeling. To this end, we propose RHYME, a unified detection framework that fuses utterance-level embeddings from diverse pretrained speech encoders using non-Euclidean projections. RHYME maps representations into hyperbolic and spherical manifolds, where hyperbolic geometry excels at modeling hierarchical generator families and spherical projections capture angular, energy-invariant cues such as periodic vocoder artifacts. The fused representation is obtained via Riemannian barycentric averaging, enabling synthesis-invariant alignment. RHYME outperforms individual PTMs and homogeneous fusion baselines, achieving top performance and setting a new state of the art in cross-paradigm ADD.


Key findings
RHYME consistently outperformed individual PTM baselines and homogeneous fusion baselines, setting new state-of-the-art performance in cross-paradigm ADD. The framework, particularly when using USAD as a backbone, achieved substantially lower EERs (e.g., 14.12% on the TR-A → TE-D transfer) than existing state-of-the-art end-to-end models such as AASIST-L (32.44%). Ablation studies confirmed that both the geometric branches and the Riemannian fusion mechanism are crucial for robust generalization under unseen conditions.
Approach
RHYME extracts and splits embeddings from frozen pretrained speech encoders (PTMs) into two components, projecting them into hyperbolic space (to model hierarchical generative traces) and spherical space (to model angular periodic artifacts). These projections are then fused using Riemannian barycentric averaging in the Poincaré ball to obtain a synthesis-agnostic representation, which is classified by a lightweight linear layer.
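As a rough illustration of the geometric operations described above, the sketch below projects one half of a frozen-PTM embedding into the Poincaré ball via the exponential map at the origin, normalizes the other half onto the unit hypersphere, and fuses the two with a tangent-space approximation of the Riemannian barycentre. The split point, curvature value, and tangent-space averaging are assumptions for illustration; the paper's exact projection and fusion formulas may differ.

```python
import numpy as np

def exp_map_origin(v, c=1.0):
    # Exponential map at the origin of the Poincare ball with curvature -c:
    # maps a tangent vector v to a point strictly inside the unit ball.
    norm = np.linalg.norm(v) + 1e-9
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def log_map_origin(x, c=1.0):
    # Inverse of exp_map_origin: maps a ball point back to the tangent space.
    norm = np.linalg.norm(x) + 1e-9
    scaled = np.clip(np.sqrt(c) * norm, 0.0, 1.0 - 1e-7)
    return np.arctanh(scaled) * x / (np.sqrt(c) * norm)

def rhyme_fuse(embedding, c=1.0):
    # Hypothetical split of an utterance-level embedding into two halves.
    h, s = np.split(embedding, 2)
    # Hyperbolic branch: model hierarchical generative traces in the ball.
    h_hyp = exp_map_origin(h, c)
    # Spherical branch: keep only angular (energy-invariant) information.
    s_sph = s / (np.linalg.norm(s) + 1e-9)
    # Tangent-space approximation of barycentric averaging: lift the
    # hyperbolic point to the origin's tangent space, average with the
    # spherical direction, and map the mean back into the ball.
    mean_tangent = 0.5 * (log_map_origin(h_hyp, c) + s_sph)
    return exp_map_origin(mean_tangent, c)
```

The fused point always lies inside the unit ball, so a lightweight linear classifier (as the Approach describes) can operate on it directly or after a final log map back to Euclidean coordinates.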
Datasets
DFADD, ASVspoof 2019 (LA subset)
Model(s)
RHYME (Hyperbolic and Spherical Geometric Fusion); PTM backbones: USAD, PaSST, Whisper, x-vector, WavLM, HuBERT, Wav2Vec 2.0
Author countries
India, UK