BEAT2AASIST model with layer fusion for ESDD 2026 Challenge
By: Sanghyeok Chung, Eujin Kim, Donggun Kim, and more
Recent advances in audio generation have increased the risk of realistic environmental sound manipulation, motivating the ESDD 2026 Challenge as the first large-scale benchmark for Environmental Sound Deepfake Detection (ESDD). We propose BEAT2AASIST, which extends BEATs-AASIST by splitting BEATs-derived representations along the frequency or channel dimension and processing them with dual AASIST branches. To enrich feature representations, we incorporate top-k transformer-layer fusion using concatenation, CNN-gated, and SE-gated strategies. In addition, vocoder-based data augmentation is applied to improve robustness against unseen spoofing methods. Experimental results on the official test sets demonstrate that the proposed approach achieves competitive performance across the challenge tracks.
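The SE-gated fusion strategy mentioned above can be illustrated with a minimal sketch. It assumes the gate follows the standard squeeze-and-excitation pattern: hidden states from the top-k BEATs layers are globally pooled, passed through a bottleneck MLP with a sigmoid, and the resulting per-layer weights rescale each layer before summation. The shapes, the bottleneck size `r`, and the function `se_gated_layer_fusion` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gated_layer_fusion(layer_feats, w1, w2):
    """SE-gated fusion of top-k transformer layer outputs (illustrative sketch).

    layer_feats: (k, T, D) hidden states from the top-k BEATs layers.
    w1: (k, r), w2: (r, k) squeeze-and-excitation bottleneck weights.
    Returns a (T, D) fused representation.
    """
    # Squeeze: global average pooling over time and feature dims -> (k,)
    z = layer_feats.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid -> per-layer gate in (0, 1)
    gates = sigmoid(np.maximum(z @ w1, 0.0) @ w2)       # shape (k,)
    # Rescale each layer by its gate and sum over the layer axis
    fused = (gates[:, None, None] * layer_feats).sum(axis=0)
    return fused

# Usage with toy shapes
rng = np.random.default_rng(0)
k, T, D, r = 4, 10, 8, 2
feats = rng.standard_normal((k, T, D))
fused = se_gated_layer_fusion(feats,
                              rng.standard_normal((k, r)),
                              rng.standard_normal((r, k)))
print(fused.shape)  # (10, 8)
```

The concatenation variant would instead stack the k layers along the feature dimension, and the CNN-gated variant would derive the gates from a small convolution over the stacked layers; only the gating mechanism differs between the three strategies described in the abstract.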