Local-Global Feature Fusion for Subject-Independent EEG Emotion Recognition
By: Zheng Zhou, Isabella McEvoy, Camilo E. Valderrama
Subject-independent EEG emotion recognition is challenged by pronounced inter-subject variability and the difficulty of learning robust representations from short, noisy recordings. To address this, we propose a fusion framework that integrates (i) local, channel-wise descriptors and (ii) global, trial-level descriptors, improving cross-subject generalization on the SEED-VII dataset. Local representations are formed per channel by concatenating differential entropy with graph-theoretic features, while global representations summarize time-domain, spectral, and complexity characteristics at the trial level. These representations are fused in a dual-branch transformer with attention-based fusion and domain-adversarial regularization, with samples filtered by an intensity threshold. Experiments under a leave-one-subject-out protocol demonstrate that the proposed method consistently outperforms single-view and classical baselines, achieving approximately 40% mean accuracy in 7-class subject-independent emotion recognition. The code has been released at https://github.com/Danielz-z/LGF-EEG-Emotion.
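The local representations are built from differential entropy (DE), a standard EEG feature: under a Gaussian assumption for a band-filtered signal, DE reduces to the closed form 0.5·ln(2πeσ²). As a minimal sketch (the band filtering, windowing, and channel layout here are illustrative assumptions, not the paper's exact pipeline):

```python
import math
import random
import statistics

def differential_entropy(samples):
    """DE of a band-filtered EEG segment under a Gaussian assumption:
    0.5 * ln(2 * pi * e * sigma^2)."""
    var = statistics.pvariance(samples)
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Hypothetical example: one channel's worth of band-filtered samples,
# simulated as unit-variance Gaussian noise for illustration.
random.seed(0)
channel = [random.gauss(0.0, 1.0) for _ in range(2000)]
de = differential_entropy(channel)
```

For a unit-variance signal the value approaches 0.5·ln(2πe) ≈ 1.42; in the framework above, one such DE value per channel and frequency band would be concatenated with graph-theoretic features to form the local descriptor.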