MSGM: A Multi-Scale Spatiotemporal Graph Mamba for EEG Emotion Recognition
By: Hanwen Liu, Yifeng Gong, Zuwei Yan, and more
Potential Business Impact:
Recognizes emotions from brain-wave (EEG) signals fast enough for real-time use on small devices.
EEG-based emotion recognition struggles with capturing multi-scale spatiotemporal dynamics and ensuring computational efficiency for real-time applications. Existing methods often oversimplify temporal granularity and spatial hierarchies, limiting accuracy. To overcome these challenges, we propose the Multi-Scale Spatiotemporal Graph Mamba (MSGM), a novel framework integrating multi-window temporal segmentation, bimodal spatial graph modeling, and efficient fusion via the Mamba architecture. By segmenting EEG signals across diverse temporal scales and constructing global-local graphs with neuroanatomical priors, MSGM captures fine-grained emotional fluctuations and hierarchical brain connectivity. A multi-depth Graph Convolutional Network (GCN) and a token embedding fusion module, paired with Mamba's state-space modeling, enable dynamic spatiotemporal interaction at linear complexity. Notably, with just one MSST-Mamba layer, MSGM surpasses state-of-the-art baselines in subject-independent emotion classification on the SEED, THU-EP, and FACED datasets, while achieving robust accuracy and millisecond-level inference on the NVIDIA Jetson Xavier NX.
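To make the pipeline described above concrete, here is a minimal PyTorch-style sketch of the three main stages: multi-window temporal segmentation, a graph convolution over EEG electrodes, and a linear-time recurrent mixer standing in for the Mamba block. The module names (MultiWindowSegmenter, SimpleGCNLayer, ToyStateSpaceMixer), the learnable adjacency, and the window sizes are illustrative assumptions, not the authors' MSGM implementation, which builds global-local graphs from neuroanatomical priors and uses a selective state-space (Mamba) kernel.

```python
# Minimal sketch (not the authors' code) of an MSGM-like pipeline,
# assuming a PyTorch implementation. All names and hyperparameters here
# are hypothetical illustrations of the stages described in the abstract.
import torch
import torch.nn as nn


class MultiWindowSegmenter(nn.Module):
    """Split an EEG recording into overlapping windows at several temporal scales."""
    def __init__(self, window_sizes=(64, 128, 256), stride_ratio=0.5):
        super().__init__()
        self.window_sizes = window_sizes
        self.stride_ratio = stride_ratio

    def forward(self, x):  # x: (batch, channels, time)
        segments = []
        for w in self.window_sizes:
            stride = max(1, int(w * self.stride_ratio))
            # unfold over the time axis -> (batch, channels, num_windows, w)
            segments.append(x.unfold(dimension=-1, size=w, step=stride))
        return segments


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step over electrodes: normalize(A) @ X @ W."""
    def __init__(self, in_dim, out_dim, num_nodes):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # A learnable adjacency stands in for the paper's global-local graphs
        # built from neuroanatomical priors (an assumption of this sketch).
        self.adj = nn.Parameter(torch.eye(num_nodes) + 0.01 * torch.randn(num_nodes, num_nodes))

    def forward(self, x):  # x: (batch, nodes, features)
        adj = torch.softmax(self.adj, dim=-1)       # row-normalize the graph
        return torch.relu(adj @ self.weight(x))     # aggregate neighbors, then transform


class ToyStateSpaceMixer(nn.Module):
    """Linear-time recurrent mixer standing in for a Mamba block (illustrative only)."""
    def __init__(self, dim):
        super().__init__()
        self.a = nn.Parameter(torch.full((dim,), 0.9))  # per-channel decay
        self.in_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, tokens):  # tokens: (batch, seq_len, dim)
        u = self.in_proj(tokens)
        h = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):                  # O(seq_len) scan
            h = self.a * h + u[:, t]
            outs.append(h)
        return self.out_proj(torch.stack(outs, dim=1))


if __name__ == "__main__":
    batch, channels, time = 2, 62, 512              # e.g., 62-channel SEED-style EEG
    x = torch.randn(batch, channels, time)

    segments = MultiWindowSegmenter()(x)            # one tensor per temporal scale
    gcn = SimpleGCNLayer(in_dim=segments[0].size(-1), out_dim=32, num_nodes=channels)

    # For brevity, average the windows of the finest scale -> (batch, nodes, window)
    feats = gcn(segments[0].mean(dim=2))
    fused = ToyStateSpaceMixer(dim=32)(feats)       # treat electrodes as a token sequence
    logits = nn.Linear(32, 3)(fused.mean(dim=1))    # 3 emotion classes (as in SEED)
    print(logits.shape)                             # torch.Size([2, 3])
```

In the full model, each temporal scale would pass through its own graph branch before token embedding fusion; the toy recurrent scan above only mimics the linear complexity of Mamba's state-space update, not its selective gating.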
Similar Papers
SAMBA: Toward a Long-Context EEG Foundation Model via Spatial Embedding and Differential Mamba
Machine Learning (CS)
Helps computers understand brain signals better.
MSV-Mamba: A Multiscale Vision Mamba Network for Echocardiography Segmentation
Image and Video Processing
Improves heart pictures for better health checks.
STM3: Mixture of Multiscale Mamba for Long-Term Spatio-Temporal Time-Series Prediction
Machine Learning (CS)
Predicts future events by seeing patterns in time.