MVRS: The Multimodal Virtual Reality Stimuli-based Emotion Recognition Dataset
By: Seyed Muhammad Hossein Mousavi, Atiye Ilanloo
Potential Business Impact:
Helps computers understand feelings from eye activity, body movement, and physiological signals.
Automatic emotion recognition has become increasingly important with the rise of AI, especially in fields like healthcare, education, and automotive systems. However, there is a lack of multimodal datasets, particularly those involving body motion and physiological signals, which limits progress in the field. To address this, the MVRS dataset is introduced, featuring synchronized recordings from 13 participants aged 12 to 60 who were exposed to VR-based emotional stimuli (relaxation, fear, stress, sadness, joy). Data were collected using eye tracking (via a webcam inside a VR headset), body motion (Kinect v2), and EMG and GSR signals (Arduino UNO), all timestamp-aligned. Participants followed a unified protocol with consent and questionnaires. Features from each modality were extracted, fused using early and late fusion techniques, and evaluated with classifiers to confirm the dataset's quality and emotion separability, making MVRS a valuable contribution to multimodal affective computing.
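To make the fusion step concrete, below is a minimal sketch of early versus late fusion over per-trial feature vectors, assuming features have already been extracted per modality and timestamp-aligned into one row per trial. The feature dimensions, the random-forest classifier, and all variable names are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of early vs. late fusion for multimodal emotion features.
# Shapes, labels, and the classifier choice are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 65                                         # e.g. 13 participants x 5 emotion conditions
X_eye, X_body, X_bio = (rng.normal(size=(n, d)) for d in (12, 30, 8))  # per-modality features
y = rng.integers(0, 5, size=n)                 # 5 emotion classes (relaxation ... joy)

tr, te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Early fusion: concatenate all modality features, then train a single classifier.
X_all = np.hstack([X_eye, X_body, X_bio])
early = RandomForestClassifier(random_state=0).fit(X_all[tr], y[tr])
print("early fusion accuracy:", early.score(X_all[te], y[te]))

# Late fusion: train one classifier per modality and average their
# predicted class probabilities before taking the final decision.
probas = []
for X in (X_eye, X_body, X_bio):
    clf = RandomForestClassifier(random_state=0).fit(X[tr], y[tr])
    probas.append(clf.predict_proba(X[te]))
    classes = clf.classes_                     # identical across modalities (same training labels)
pred = classes[np.mean(probas, axis=0).argmax(axis=1)]
print("late fusion accuracy:", (pred == y[te]).mean())
```

With real MVRS features in place of the random arrays, comparing the two accuracies is one simple way to check whether combining modalities at the feature level or at the decision level better separates the five emotion classes.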
Similar Papers
EVA-MED: An Enhanced Valence-Arousal Multimodal Emotion Dataset for Emotion Recognition
Human-Computer Interaction
Helps computers understand your feelings better.
Realtime Multimodal Emotion Estimation using Behavioral and Neurophysiological Data
Human-Computer Interaction
Helps people understand feelings by reading body signals.
MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation
CV and Pattern Recognition
Helps computers understand videos by watching and listening.