BEAT2AASIST model with layer fusion for ESDD 2026 Challenge

Authors: Sanghyeok Chung, Eujin Kim, Donggun Kim, Gaeun Heo, Jeongbin You, Nahyun Lee, Sunmook Choi, Soyul Han, Seungsang Oh, Il-Youp Kwak

Published: 2025-12-17 08:24:12+00:00

Comment: 3 pages, 1 figure, challenge paper

AI Summary

This paper introduces BEAT2AASIST, an extension of the BEATs-AASIST model, for Environmental Sound Deepfake Detection (ESDD) within the ESDD 2026 Challenge. The proposed model enriches feature representations by splitting BEATs-derived features along the frequency or channel dimension so that each half is processed by its own AASIST branch, and by incorporating top-k transformer layer fusion strategies. Additionally, vocoder-based data augmentation is used to improve robustness against unseen spoofing methods.

Abstract

Recent advances in audio generation have increased the risk of realistic environmental sound manipulation, motivating the ESDD 2026 Challenge as the first large-scale benchmark for Environmental Sound Deepfake Detection (ESDD). We propose BEAT2AASIST, which extends BEATs-AASIST by splitting BEATs-derived representations along the frequency or channel dimension and processing them with dual AASIST branches. To enrich feature representations, we incorporate top-k transformer layer fusion using concatenation, CNN-gated, and SE-gated strategies. In addition, vocoder-based data augmentation is applied to improve robustness against unseen spoofing methods. Experimental results on the official test sets demonstrate that the proposed approach achieves competitive performance across the challenge tracks.


Key findings
The BEAT2AASIST models, particularly their ensemble variants, achieve competitive performance on both Track 1 and Track 2 of the ESDD 2026 Challenge. The proposed approach secured a third-place ranking on Track 2, demonstrating strong performance in detecting environmental sound deepfakes, especially under challenging black-box conditions.
Approach
The approach extends the BEATs-AASIST baseline by splitting BEATs-derived representations along either the frequency or channel dimension and processing each half with its own AASIST branch. It also fuses the top-k transformer layers using concatenation, CNN-gated, and SE-gated strategies to enrich feature representations. Vocoder-based data augmentation with HiFi-GAN, BigVGAN, and UnivNet is applied to improve robustness against unseen spoofing methods.
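The two core ideas above, fusing the top-k transformer layer outputs and splitting the fused features into two branch inputs, can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the shapes, the reduction ratio, and the random stand-in weights are all assumptions, and the SE gate here follows the generic squeeze-and-excitation recipe (global average pooling, a bottleneck, and a sigmoid) rather than the paper's exact module.

```python
# Illustrative sketch (NOT the authors' implementation) of top-k layer
# fusion with an SE-style gate, followed by the channel split that feeds
# two AASIST-style branches. All shapes/weights here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def se_gate_fusion(layers, reduction=4):
    """Fuse top-k layer outputs (each of shape (T, C)) with an SE gate."""
    x = np.concatenate(layers, axis=-1)           # (T, k*C): concat fusion
    c = x.shape[-1]
    s = x.mean(axis=0)                            # squeeze: average over time
    # Excitation: random-init linear maps stand in for learned weights.
    w1 = rng.standard_normal((c, c // reduction)) / np.sqrt(c)
    w2 = rng.standard_normal((c // reduction, c)) / np.sqrt(c // reduction)
    gate = 1.0 / (1.0 + np.exp(-(np.maximum(s @ w1, 0.0) @ w2)))  # sigmoid
    return x * gate                               # channel-wise reweighting

# Top-k = 3 layer outputs from a hypothetical encoder, each (T=50, C=64).
top_k = [rng.standard_normal((50, 64)) for _ in range(3)]
fused = se_gate_fusion(top_k)                     # (50, 192)

# Channel split: each half would go to its own AASIST-style branch.
branch_a, branch_b = np.split(fused, 2, axis=-1)  # (50, 96) each
```

The same split could instead be taken along the frequency (time-frequency) axis, which is the paper's alternative configuration; only the `axis` argument and the upstream feature layout would change.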
Datasets
EnvSDD dataset
Model(s)
BEATs (BEATs-iter3), AASIST, BEAT2AASIST, HiFi-GAN, BigVGAN, UnivNet
Author countries
South Korea, USA