An Emotion Recognition Framework via Cross-modal Alignment of EEG and Eye Movement Data
By: Jianlu Wang, Yanan Wang, Tong Liu
Potential Business Impact:
Reads your feelings using brain waves and eyes.
Emotion recognition is essential for applications in affective computing and behavioral prediction, but conventional systems that rely on single-modality data often fail to capture the complexity of affective states. To address this limitation, we propose an emotion recognition framework that achieves accurate multimodal alignment of electroencephalogram (EEG) and eye movement data through a hybrid architecture built on a cross-modal attention mechanism. Experiments on the SEED-IV dataset demonstrate that our method achieves 90.62% accuracy. This work provides a promising foundation for leveraging multimodal data in emotion recognition.
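The abstract does not spell out the architecture, but the core idea of cross-modal attention between EEG and eye-movement features can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the authors' implementation: the feature dimensions (310-dim EEG features and 33-dim eye features are typical for SEED-style data but are placeholders here), the embedding size, the head count, and the mean-pooling fusion. Only the four-class output matches SEED-IV's label set.

```python
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Illustrative sketch: align EEG and eye-movement features with
    cross-modal attention. All dimensions and layer choices are
    assumptions, not the architecture described in the paper."""

    def __init__(self, eeg_dim=310, eye_dim=33, d_model=128, n_heads=4, n_classes=4):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.eeg_proj = nn.Linear(eeg_dim, d_model)
        self.eye_proj = nn.Linear(eye_dim, d_model)
        # EEG queries attend over eye-movement keys/values, and vice versa.
        self.eeg_to_eye = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.eye_to_eeg = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Classify the fused representation (SEED-IV has 4 emotion classes).
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, eeg, eye):
        # eeg: (batch, T_eeg, eeg_dim); eye: (batch, T_eye, eye_dim)
        eeg_h = self.eeg_proj(eeg)
        eye_h = self.eye_proj(eye)
        # Each modality is re-expressed in terms of the other, which is
        # one common way to realize cross-modal alignment.
        eeg_aligned, _ = self.eeg_to_eye(eeg_h, eye_h, eye_h)
        eye_aligned, _ = self.eye_to_eeg(eye_h, eeg_h, eeg_h)
        # Pool over time and concatenate the two aligned streams.
        fused = torch.cat([eeg_aligned.mean(dim=1), eye_aligned.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Usage example with random tensors standing in for real features.
model = CrossModalAttentionFusion()
logits = model(torch.randn(8, 62, 310), torch.randn(8, 62, 33))
print(logits.shape)  # torch.Size([8, 4])
```

The bidirectional attention here (EEG attending to eye movements and vice versa) is a standard design for multimodal fusion; whether the paper uses one direction, both, or a different pooling strategy is not stated in the abstract.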
Similar Papers
CMCRD: Cross-Modal Contrastive Representation Distillation for Emotion Recognition
Human-Computer Interaction
Computers guess feelings better with less data.
ECMF: Enhanced Cross-Modal Fusion for Multimodal Emotion Recognition in MER-SEMI Challenge
CV and Pattern Recognition
Helps computers understand your feelings from faces, voices, words.
Cross-domain EEG-based Emotion Recognition with Contrastive Learning
CV and Pattern Recognition
Reads your feelings from brain waves.