Representation Selective Self-distillation and wav2vec 2.0 Feature Exploration for Spoof-aware Speaker Verification

Authors: Jin Woo Lee, Eungbeom Kim, Junghyun Koo, Kyogu Lee

Published: 2022-04-06 07:47:36+00:00

Comment: Accepted to be published in the Proceedings of Interspeech 2022

AI Summary

This paper investigates the effectiveness of wav2vec 2.0 features for spoofing detection in automatic speaker verification (ASV) systems and proposes a Spoof-Aware Speaker Verification (SASV) method. The study analyzes which feature space within wav2vec 2.0, specifically from different Transformer layers of XLSR-53, is most advantageous for identifying synthetic speech artifacts. A novel Representation Selective Self-Distillation (RSSD) module is introduced to improve SASV by disentangling speaker and spoofing representations.

Abstract

Text-to-speech and voice conversion studies are constantly improving to the extent where they can produce synthetic speech almost indistinguishable from bona fide human speech. In this regard, the importance of countermeasures (CM) against synthetic voice attacks on automatic speaker verification (ASV) systems emerges. Nonetheless, most end-to-end spoofing detection networks are black-box systems, and the answer to what is an effective representation for finding artifacts remains veiled. In this paper, we examine which feature space can effectively represent synthetic artifacts using wav2vec 2.0, and study which architecture can effectively utilize that space. Our study allows us to analyze which attributes of speech signals are advantageous for CM systems. The proposed CM system achieved a 0.31% equal error rate (EER) on the ASVspoof 2019 LA evaluation set for the spoof detection task. We further propose a simple yet effective spoof-aware speaker verification (SASV) method, which takes advantage of the disentangled representations from our countermeasure system. Evaluation performed on the SASV Challenge 2022 database shows a SASV EER of 1.08%. Quantitative analysis shows that using the explored feature space of wav2vec 2.0 benefits both spoofing CM and SASV.


Key findings
The study found that features from the 5th layer of XLSR-53 (a wav2vec 2.0 model) were most effective for spoofing detection, outperforming existing sinc convolution-based front-ends. A simple Attentive Statistics Pooling (ASP) layer as the back-end, combined with XLSR-53 features, achieved the best countermeasure performance (0.31% EER on ASVspoof 2019 LA). The proposed Representation Selective Self-Distillation (RSSD) method leveraging these improved countermeasures achieved a state-of-the-art 1.08% SASV EER on the SASV Challenge 2022 database.
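Both headline numbers are equal error rates: the EER is the operating point at which the false acceptance rate (spoofed or impostor trials accepted) equals the false rejection rate (genuine trials rejected). A minimal NumPy sketch of this metric, using linear-search interpolation over a score sweep (an illustrative implementation, not the challenge's official scoring script):

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: the threshold where FAR == FRR.

    target_scores    -- scores for genuine (bona fide) trials
    nontarget_scores -- scores for spoofed/impostor trials
    Higher scores are assumed to mean "more genuine".
    """
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    order = np.argsort(scores)          # sweep threshold from low to high
    labels = labels[order]
    # At a threshold just above scores[i], items 0..i are rejected.
    frr = np.cumsum(labels) / labels.sum()                 # rejected targets
    far = 1.0 - np.cumsum(1 - labels) / (1 - labels).sum() # accepted nontargets
    idx = np.argmin(np.abs(far - frr))  # closest crossing point
    return (far[idx] + frr[idx]) / 2
```

With perfectly separated scores the EER is 0%; a 0.31% EER on ASVspoof 2019 LA means the FAR/FRR crossing occurs with roughly 3 errors per 1000 trials of each kind.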
Approach
The authors explore features from different Transformer layers of a pre-trained wav2vec 2.0 (XLSR-53) model as a front-end for spoofing detection. They compare various lightweight back-end architectures like MLP and Attentive Statistics Pooling (ASP) with the AASIST baseline. For spoof-aware speaker verification, they propose Representation Selective Self-Distillation (RSSD) which uses disentangled representations from independently trained speaker verification (ECAPA-TDNN) and countermeasure networks.
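The back-end comparison centers on Attentive Statistics Pooling, which collapses a sequence of frame-level features into one utterance-level vector by concatenating an attention-weighted mean and standard deviation. A minimal NumPy sketch of that pooling operation (the function name, weight shapes, and tanh bottleneck here are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def attentive_stats_pooling(h, w1, b1, w2, b2):
    """Pool frame-level features h of shape (T, D) into a (2*D,) vector.

    w1 (D, A), b1 (A,), w2 (A, 1), b2 (1,) parameterize a small
    attention network that scores each of the T frames.
    """
    # Per-frame attention scores via a tanh bottleneck, then softmax over time.
    e = np.tanh(h @ w1 + b1) @ w2 + b2          # (T, 1)
    a = np.exp(e - e.max())
    a = a / a.sum()                             # attention weights, sum to 1
    # Attention-weighted first and second order statistics.
    mu = (a * h).sum(axis=0)                    # weighted mean, (D,)
    var = (a * (h - mu) ** 2).sum(axis=0)       # weighted variance, (D,)
    sigma = np.sqrt(np.clip(var, 1e-9, None))   # weighted std, (D,)
    return np.concatenate([mu, sigma])          # (2*D,)
```

When the attention scores are constant, the weights are uniform and the output reduces to the plain mean and standard deviation over frames; the attention network lets the pooling emphasize frames that carry spoofing artifacts.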
Datasets
ASVspoof 2019 LA database, SASV Challenge 2022 database
Model(s)
wav2vec 2.0 (XLSR-53), AASIST, Multilayer Perceptron (MLP), Attentive Statistics Pooling (ASP), ECAPA-TDNN
Author countries
South Korea