Not All Deepfakes Are Created Equal: Triaging Audio Forgeries for Robust Deepfake Singer Identification
Authors: Davide Salvi, Hendrik Vincent Koops, Elio Quinton
Published: 2025-10-20 12:16:52+00:00
AI Summary
The paper addresses the challenge of identifying singers in highly realistic vocal deepfakes by introducing a two-stage pipeline that triages audio forgeries by quality. The system first uses a discriminator to filter out low-quality deepfakes that fail to reproduce vocal likeness, focusing attention on the most harmful, high-quality fakes. Experiments demonstrate that this triage approach yields more robust singer identification across both authentic and synthetic content than existing baselines.
Abstract
The proliferation of highly realistic singing voice deepfakes presents a significant challenge to protecting artist likeness and content authenticity. Automatic singer identification in vocal deepfakes is a promising avenue for artists and rights holders to defend against unauthorized use of their voice, but remains an open research problem. Based on the premise that the most harmful deepfakes are those of the highest quality, we introduce a two-stage pipeline to identify a singer's vocal likeness. It first employs a discriminator model to filter out low-quality forgeries that fail to accurately reproduce vocal likeness. A subsequent model, trained exclusively on authentic recordings, identifies the singer in the remaining high-quality deepfakes and authentic audio. Experiments show that this system consistently outperforms existing baselines on both authentic and synthetic content.
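To make the two-stage design described above concrete, the sketch below shows one possible control flow: a discriminator scores how convincingly a clip reproduces a vocal likeness, low-scoring clips are triaged out as low-quality forgeries, and the remaining clips are passed to a singer-identification model trained only on authentic recordings. All class names, thresholds, and the toy scoring functions are illustrative assumptions, not the authors' implementation.

```python
"""Minimal sketch of a quality-triage pipeline for deepfake singer identification.

Assumptions (not from the paper): the class/function names, the 0.5 threshold,
and the dummy models used in the usage example.
"""

from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np


@dataclass
class TriageResult:
    """Outcome of the pipeline for a single audio clip."""
    passed_triage: bool               # True if the clip was deemed high-quality (or authentic)
    likeness_score: float             # stage-1 discriminator score
    predicted_singer: Optional[str]   # stage-2 identity, None if the clip was triaged out


class TwoStagePipeline:
    """Stage 1: a discriminator scores vocal-likeness quality; clips below
    `threshold` are filtered out as low-quality forgeries.
    Stage 2: a singer-identification model, trained only on authentic
    recordings, labels the remaining clips (authentic audio and
    high-quality deepfakes)."""

    def __init__(
        self,
        discriminator: Callable[[np.ndarray], float],
        identifier: Callable[[np.ndarray], str],
        threshold: float = 0.5,  # assumed operating point; tuned on validation data in practice
    ) -> None:
        self.discriminator = discriminator
        self.identifier = identifier
        self.threshold = threshold

    def process(self, audio: np.ndarray) -> TriageResult:
        score = self.discriminator(audio)
        if score < self.threshold:
            # Low-quality forgery: vocal likeness is not convincingly reproduced,
            # so the clip is rejected before singer identification.
            return TriageResult(passed_triage=False, likeness_score=score, predicted_singer=None)
        # High-quality deepfake or authentic audio: identify the singer.
        return TriageResult(
            passed_triage=True,
            likeness_score=score,
            predicted_singer=self.identifier(audio),
        )


if __name__ == "__main__":
    # Toy stand-ins for the two models, only to demonstrate the control flow.
    rng = np.random.default_rng(0)
    dummy_discriminator = lambda audio: float(np.clip(np.abs(audio).mean() * 10, 0.0, 1.0))
    dummy_identifier = lambda audio: "singer_A" if audio.mean() > 0 else "singer_B"

    pipeline = TwoStagePipeline(dummy_discriminator, dummy_identifier, threshold=0.5)
    clip = rng.normal(scale=0.1, size=16_000)  # one second of synthetic audio at 16 kHz
    print(pipeline.process(clip))
```

The key design choice reflected here is that the stage-2 identifier never sees synthetic data during training; the discriminator's job is to ensure that anything reaching it either is authentic or reproduces the target vocal likeness well enough for identification to be meaningful.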