A Robust Multi-Scale Framework with Test-Time Adaptation for sEEG-Based Speech Decoding

Published: September 29, 2025 | arXiv ID: 2509.24700v1

By: Suli Wang, Yang-yang Li, Siqi Cai, and more

Potential Business Impact:

Could enable paralyzed patients to communicate by decoding intended speech from their brain signals.

Business Areas:
Speech Recognition Data and Analytics, Software

Decoding speech from stereo-electroencephalography (sEEG) signals has emerged as a promising direction for brain-computer interfaces (BCIs). Its clinical applicability, however, is limited by the inherent non-stationarity of neural signals, which causes domain shifts between training and testing, undermining decoding reliability. To address this challenge, a two-stage framework is proposed for enhanced robustness. First, a multi-scale decomposable mixing (MDM) module is introduced to model the hierarchical temporal dynamics of speech production, learning stable multi-timescale representations from sEEG signals. Second, a source-free online test-time adaptation (TTA) method performs entropy minimization to adapt the model to distribution shifts during inference. Evaluations on the public DU-IN spoken word decoding benchmark show that the approach outperforms state-of-the-art models, particularly in challenging cases. This study demonstrates that combining invariant feature learning with online adaptation is a principled strategy for developing reliable BCI systems. Our code is available at https://github.com/lyyi599/MDM-TENT.
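The second stage above is TENT-style test-time adaptation: at inference, the model's predictions on each incoming batch are made more confident by taking gradient steps that minimize the entropy of the softmax outputs (in the paper's setting this typically updates only normalization parameters). As a minimal, self-contained illustration of the entropy-minimization objective itself, the sketch below performs gradient descent on prediction entropy directly in logit space with NumPy; the function names and the closed-form gradient derivation are illustrative, not taken from the authors' code.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    """Shannon entropy of each probability row."""
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def entropy_grad_wrt_logits(z):
    # For p = softmax(z) and H = -sum_i p_i log p_i, one can show
    # dH/dz_k = -p_k (log p_k + H); descending this gradient makes
    # the prediction more confident (lower entropy).
    p = softmax(z)
    H = entropy(p)[..., None]
    return -p * (np.log(p + 1e-12) + H)

def tta_adapt(z, lr=0.5, steps=10):
    """Toy online adaptation: gradient descent on prediction entropy."""
    for _ in range(steps):
        z = z - lr * entropy_grad_wrt_logits(z)
    return z
```

In the actual TENT recipe the entropy loss is backpropagated through the network and only the batch-norm/layer-norm affine parameters are updated, which keeps adaptation source-free and cheap; the logit-space version here just makes the objective and its effect (sharper, lower-entropy predictions) concrete.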

Repos / Data Links
https://github.com/lyyi599/MDM-TENT

Page Count
5 pages

Category
Computer Science:
Human-Computer Interaction