Efficient Solutions for Mitigating Initialization Bias in Unsupervised Self-Adaptive Auditory Attention Decoding
By: Yuanyuan Yao, Simon Geirnaert, Tinne Tuytelaars, and more
Potential Business Impact:
Helps hearing aids focus on one voice.
Decoding the attended speaker in a multi-speaker environment from electroencephalography (EEG) has attracted growing interest in recent years, with neuro-steered hearing devices as a driver application. Current approaches typically rely on ground-truth labels of the attended speaker during training, necessitating calibration sessions for each user and each EEG set-up to achieve optimal performance. While unsupervised self-adaptive auditory attention decoding (AAD) for stimulus reconstruction has been developed to eliminate the need for labeled data, it suffers from an initialization bias that can compromise performance. Although an unbiased variant has been proposed to address this limitation, it introduces substantial computational complexity that scales with data size. This paper presents three computationally efficient alternatives that achieve comparable performance, but with a significantly lower and constant computational cost. The code for the proposed algorithms is available at https://github.com/YYao-42/Unsupervised_AAD.
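To make the idea concrete, below is a minimal, self-contained sketch of correlation-based stimulus-reconstruction AAD with an unsupervised self-adaptive (predict-then-retrain) loop, run on synthetic data. This is an illustration of the general technique only, not the paper's algorithm or its unbiased variants; the ridge regularization, the use of absolute correlation to sidestep the sign ambiguity of a randomly initialized decoder, and all data parameters are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_decoder(eeg, envelope, lam=1e-2):
    # Ridge-regularized least-squares decoder mapping EEG channels to a speech envelope.
    G = eeg.T @ eeg + lam * np.eye(eeg.shape[1])
    return np.linalg.solve(G, eeg.T @ envelope)

def decode_attention(eeg, env_a, env_b, w):
    # Reconstruct the envelope from EEG and pick the speaker with the higher
    # absolute Pearson correlation (abs() is a simplification that avoids the
    # sign ambiguity of a random initial decoder).
    rec = eeg @ w
    r_a = abs(np.corrcoef(rec, env_a)[0, 1])
    r_b = abs(np.corrcoef(rec, env_b)[0, 1])
    return 0 if r_a > r_b else 1

# Toy data: each trial's EEG is the attended envelope mixed into the channels plus noise.
n_trials, T, C = 20, 500, 8
mix = rng.normal(size=C)  # fixed forward model of the attended envelope
trials = []
for t in range(n_trials):
    env_a, env_b = rng.normal(size=T), rng.normal(size=T)
    label = t % 2  # alternate the attended speaker across trials
    attended = env_a if label == 0 else env_b
    eeg = np.outer(attended, mix) + 0.5 * rng.normal(size=(T, C))
    trials.append((eeg, env_a, env_b, label))

# Unsupervised self-adaptive loop: start from a random decoder, then iterate
# "predict labels on unlabeled data -> retrain the decoder on those labels".
w = rng.normal(size=C)
for _ in range(5):
    labels = [decode_attention(eeg, ea, eb, w) for eeg, ea, eb, _ in trials]
    X = np.vstack([eeg for eeg, _, _, _ in trials])
    y = np.concatenate([ea if l == 0 else eb
                        for (eeg, ea, eb, _), l in zip(trials, labels)])
    w = ridge_decoder(X, y)

acc = np.mean([decode_attention(eeg, ea, eb, w) == lab
               for eeg, ea, eb, lab in trials])
```

The initialization bias the paper targets arises exactly in this kind of loop: the labels produced by the first (random or poorly matched) decoder feed back into every subsequent retraining step.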
Similar Papers
Unsupervised EEG-based decoding of absolute auditory attention with canonical correlation analysis
Signal Processing
Lets computers tell if you're really listening.
Frequency-Based Alignment of EEG and Audio Signals Using Contrastive Learning and SincNet for Auditory Attention Detection
Signal Processing
Helps hearing aids know who you're listening to.
A Robust Multi-Scale Framework with Test-Time Adaptation for sEEG-Based Speech Decoding
Human-Computer Interaction
Lets paralyzed people talk by reading brain waves.