BEAT2AASIST model with layer fusion for ESDD 2026 Challenge
Authors: Sanghyeok Chung, Eujin Kim, Donggun Kim, Gaeun Heo, Jeongbin You, Nahyun Lee, Sunmook Choi, Soyul Han, Seungsang Oh, Il-Youp Kwak
Published: 2025-12-17 08:24:12+00:00
Comment: 3 pages, 1 figure, challenge paper
AI Summary
This paper introduces BEAT2AASIST, an extension of the BEATs-AASIST model, for Environmental Sound Deepfake Detection (ESDD) within the ESDD 2026 Challenge. The proposed model enhances feature representations by splitting BEATs-derived features for processing by dual AASIST branches and incorporates top-k transformer layer fusion strategies. Additionally, vocoder-based data augmentation is utilized to improve robustness against unseen spoofing methods.
Abstract
Recent advances in audio generation have increased the risk of realistic environmental sound manipulation, motivating the ESDD 2026 Challenge as the first large-scale benchmark for Environmental Sound Deepfake Detection (ESDD). We propose BEAT2AASIST, which extends BEATs-AASIST by splitting BEATs-derived representations along the frequency or channel dimension and processing them with dual AASIST branches. To enrich feature representations, we incorporate top-k transformer layer fusion using concatenation, CNN-gated, and SE-gated strategies. In addition, vocoder-based data augmentation is applied to improve robustness against unseen spoofing methods. Experimental results on the official test sets demonstrate that the proposed approach achieves competitive performance across the challenge tracks.
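To make the fusion and splitting ideas concrete, here is a minimal NumPy sketch of one plausible reading of the abstract: SE-style gating over the top-k transformer layer outputs, followed by a channel split that feeds two branches. The tensor shapes, bottleneck width, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gated_fusion(layer_feats, w1, w2):
    """SE-gated fusion of top-k transformer layer outputs (assumed form).

    layer_feats: (k, T, C) hidden states from the k selected BEATs layers.
    w1: (k, r), w2: (r, k) squeeze-and-excitation bottleneck weights.
    Returns a gated, summed representation of shape (T, C).
    """
    # Squeeze: global average pool each layer to one scalar descriptor.
    z = layer_feats.mean(axis=(1, 2))                # (k,)
    # Excite: bottleneck MLP + sigmoid yields one gate per layer.
    gates = sigmoid(np.maximum(z @ w1, 0.0) @ w2)    # (k,)
    # Reweight each layer by its gate and sum across layers.
    return (gates[:, None, None] * layer_feats).sum(axis=0)  # (T, C)

def split_channels(feats):
    """Split fused features along the channel dimension for dual branches."""
    c = feats.shape[-1]
    return feats[..., : c // 2], feats[..., c // 2 :]

# Example with hypothetical sizes: k=4 layers, T=10 frames, C=8 channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 10, 8))
fused = se_gated_fusion(feats, rng.standard_normal((4, 2)),
                        rng.standard_normal((2, 4)))
branch_a, branch_b = split_channels(fused)
```

Each half would then go to its own AASIST branch; the frequency-dimension variant would split along the time-frequency axis instead of the channel axis.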