Score: 1

Efficient Solutions for Mitigating Initialization Bias in Unsupervised Self-Adaptive Auditory Attention Decoding

Published: September 18, 2025 | arXiv ID: 2509.14764v1

By: Yuanyuan Yao, Simon Geirnaert, Tinne Tuytelaars, and more

Potential Business Impact:

Helps hearing aids focus on one voice.

Business Areas:
Assistive Technology, Hearing Devices

Decoding the attended speaker in a multi-speaker environment from electroencephalography (EEG) has attracted growing interest in recent years, with neuro-steered hearing devices as a driver application. Current approaches typically rely on ground-truth labels of the attended speaker during training, necessitating calibration sessions for each user and each EEG set-up to achieve optimal performance. While unsupervised self-adaptive auditory attention decoding (AAD) for stimulus reconstruction has been developed to eliminate the need for labeled data, it suffers from an initialization bias that can compromise performance. Although an unbiased variant has been proposed to address this limitation, it introduces substantial computational complexity that scales with data size. This paper presents three computationally efficient alternatives that achieve comparable performance, but with a significantly lower and constant computational cost. The code for the proposed algorithms is available at https://github.com/YYao-42/Unsupervised_AAD.
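To make the abstract's description more concrete, below is a minimal sketch of an unsupervised self-adaptive stimulus-reconstruction loop of the kind it refers to: a decoder predicts labels for unlabeled trials, is retrained on those self-predicted labels, and the process repeats, so early labels inherit a bias from the initial decoder. All function names, shapes, the ridge parameter, and the iteration count are illustrative assumptions, not the paper's actual algorithm or the code in the linked repository.

```python
# Hypothetical sketch of unsupervised self-adaptive AAD via stimulus reconstruction.
# Each trial is a tuple (X, envs): lagged EEG features X of shape (T, D) and a list
# of candidate speaker envelopes, each of shape (T,).
import numpy as np


def train_decoder(X, s, lam=1e-2):
    """Ridge-regression decoder mapping lagged EEG X to a speech envelope s."""
    Rxx = X.T @ X / len(X)              # EEG covariance
    rxs = X.T @ s / len(X)              # EEG-envelope cross-correlation
    return np.linalg.solve(Rxx + lam * np.eye(X.shape[1]), rxs)


def predict_labels(w, trials):
    """Label each trial with the speaker whose envelope correlates best
    with the envelope reconstructed from the EEG."""
    labels = []
    for X, envs in trials:
        s_hat = X @ w
        corrs = [np.corrcoef(s_hat, e)[0, 1] for e in envs]
        labels.append(int(np.argmax(corrs)))
    return labels


def self_adaptive_aad(trials, w_init, n_iter=10):
    """Alternate between predicting labels and retraining the decoder.
    Labels produced in early iterations depend on w_init; this dependence
    is the initialization bias the paper aims to mitigate."""
    w = w_init
    for _ in range(n_iter):
        labels = predict_labels(w, trials)
        X_all = np.vstack([X for X, _ in trials])
        s_all = np.concatenate([envs[l] for (X, envs), l in zip(trials, labels)])
        w = train_decoder(X_all, s_all)
    return w, labels
```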

Repos / Data Links
https://github.com/YYao-42/Unsupervised_AAD

Page Count
5 pages

Category
Electrical Engineering and Systems Science: Signal Processing