SONICS: Synthetic Or Not -- Identifying Counterfeit Songs

Authors: Md Awsafur Rahman, Zaber Ibn Abdul Hakim, Najibul Haque Sarker, Bishmoy Paul, Shaikh Anowarul Fattah

Published: 2024-08-26 08:02:57+00:00

Comment: Accepted to ICLR 2025. Project url: https://github.com/awsaf49/sonics

AI Summary

This paper introduces SONICS, a novel large-scale dataset for end-to-end synthetic song detection, addressing the limitations of existing datasets which primarily focus on singing voice deepfake detection. It also proposes SpecTTTra, an efficient Transformer-based architecture designed to effectively capture long-range temporal dependencies in songs for improved authenticity detection. SpecTTTra outperforms conventional CNN and Transformer models in both performance and computational efficiency.

Abstract

The recent surge in AI-generated songs presents exciting possibilities and challenges. These innovations necessitate the ability to distinguish between human-composed and synthetic songs to safeguard artistic integrity and protect human musical artistry. Existing research and datasets in fake song detection only focus on singing voice deepfake detection (SVDD), where the vocals are AI-generated but the instrumental music is sourced from real songs. However, these approaches are inadequate for detecting contemporary end-to-end artificial songs where all components (vocals, music, lyrics, and style) could be AI-generated. Additionally, existing datasets lack music-lyrics diversity, long-duration songs, and open-access fake songs. To address these gaps, we introduce SONICS, a novel dataset for end-to-end Synthetic Song Detection (SSD), comprising over 97k songs (4,751 hours) with over 49k synthetic songs from popular platforms like Suno and Udio. Furthermore, we highlight the importance of modeling long-range temporal dependencies in songs for effective authenticity detection, an aspect entirely overlooked in existing methods. To utilize long-range patterns, we introduce SpecTTTra, a novel architecture that significantly improves time and memory efficiency over conventional CNN and Transformer-based models. For long songs, our top-performing variant outperforms ViT by 8% in F1 score, is 38% faster, and uses 26% less memory, while also surpassing ConvNeXt with a 1% F1 score gain, 20% speed boost, and 67% memory reduction.


Key findings
On long songs, the proposed SpecTTTra model outperforms conventional models in F1 score (by up to 8% over ViT and 1% over ConvNeXt), while also running substantially faster (up to 38% faster than ViT) and using far less memory (up to 67% less than ConvNeXt). The results underscore the critical importance of modeling long-range temporal dependencies for effective synthetic song detection, and the AI-based methods consistently outperform human evaluators.
Approach
The authors tackle end-to-end synthetic song detection by first creating SONICS, a large-scale dataset of over 97k songs (real and AI-generated via Suno and Udio) that specifically includes long-duration songs and diverse music and lyrics. They then propose SpecTTTra, a novel architecture that pairs a spectro-temporal tokenizer with a ViT-like Transformer encoder, enabling efficient processing of long audio sequences and modeling of long-range temporal dependencies for classification.
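To make the efficiency argument concrete, the sketch below illustrates the general idea of spectro-temporal tokenization: instead of the T×F grid of patch tokens a ViT would produce from a spectrogram, the spectrogram is sliced into temporal clips (all frequency bins over a few frames) and spectral clips (a few frequency bins over all frames), so the token count grows additively rather than multiplicatively. This is a simplified, hypothetical NumPy sketch, not the authors' implementation; the clip sizes, projection details, and token ordering are assumptions for illustration.

```python
import numpy as np

def spectro_temporal_tokenize(spec, t_clip, f_clip, d_model, rng):
    """Illustrative spectro-temporal tokenization (hypothetical sketch).

    Produces n_frames//t_clip temporal tokens plus n_mels//f_clip
    spectral tokens, instead of the (n_frames/p) * (n_mels/p) patch
    tokens of a standard ViT tokenizer.
    """
    n_mels, n_frames = spec.shape
    assert n_frames % t_clip == 0 and n_mels % f_clip == 0

    # Temporal clips: each token covers all mel bins for t_clip frames.
    t_tokens = spec.reshape(n_mels, n_frames // t_clip, t_clip)
    t_tokens = t_tokens.transpose(1, 0, 2).reshape(-1, n_mels * t_clip)

    # Spectral clips: each token covers f_clip mel bins over all frames.
    f_tokens = spec.reshape(n_mels // f_clip, f_clip, n_frames)
    f_tokens = f_tokens.reshape(-1, f_clip * n_frames)

    # Linear projections to the model dimension (random stand-ins for
    # what would be learned weights in the real model).
    W_t = rng.standard_normal((n_mels * t_clip, d_model))
    W_f = rng.standard_normal((f_clip * n_frames, d_model))
    tokens = np.concatenate([t_tokens @ W_t, f_tokens @ W_f], axis=0)
    return tokens  # shape: (n_frames//t_clip + n_mels//f_clip, d_model)

rng = np.random.default_rng(0)
spec = rng.standard_normal((128, 1024))  # 128 mel bins x 1024 frames
tokens = spectro_temporal_tokenize(spec, t_clip=8, f_clip=8, d_model=64, rng=rng)
print(tokens.shape)  # (1024//8 + 128//8, 64) = (144, 64)
```

With these assumed clip sizes, a 128x1024 spectrogram yields 144 tokens rather than the 2,048 tokens a 16x16 ViT patching would produce, which is the kind of sequence-length reduction that makes attention over full-length songs tractable.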
Datasets
SONICS (proposed), SingFake, FSD, CtrSVDD (for comparison), YouTube (for real song sourcing), Genius Lyrics Dataset (for metadata), Suno, Udio (for synthetic song generation).
Model(s)
SpecTTTra (proposed), ConvNeXt, ViT, EfficientViT.
Author countries
USA, Bangladesh