Emotion Recognition in Signers
By: Kotaro Funakoshi, Yaoxiong Zhu
Potential Business Impact:
Helps computers better understand the emotions of sign language users.
Recognition of signers' emotions faces one theoretical challenge and one practical challenge: the overlap between grammatical and affective facial expressions, and the scarcity of data for model training. This paper addresses both challenges in a cross-lingual setting using our eJSL dataset, a new benchmark for emotion recognition in Japanese Sign Language signers, and BOBSL, a large British Sign Language dataset with subtitles. In eJSL, two signers expressed 78 distinct utterances in each of seven emotional states, resulting in 1,092 video clips. We empirically demonstrate that 1) textual emotion recognition in spoken language mitigates data scarcity in sign language, 2) temporal segment selection has a significant impact on recognition performance, and 3) incorporating hand motion enhances emotion recognition in signers. Finally, we establish a baseline stronger than spoken-language LLMs.
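The dataset size follows directly from the collection design: 2 signers x 78 utterances x 7 emotional states = 1,092 clips. The short Python sketch below just enumerates that composition as a sanity check; the specific emotion label names and identifier strings are assumptions for illustration (the abstract only states that there are seven emotional states), not labels taken from the paper.

```python
# Sketch of the eJSL dataset composition described in the abstract:
# 2 signers x 78 distinct utterances x 7 emotional states = 1,092 video clips.
from itertools import product

signers = ["signer_A", "signer_B"]                    # 2 signers (names assumed)
utterances = [f"utt_{i:02d}" for i in range(1, 79)]   # 78 distinct utterances
emotions = ["neutral", "happy", "sad", "angry",
            "surprised", "fearful", "disgusted"]      # 7 states (labels assumed)

clips = [
    {"signer": s, "utterance": u, "emotion": e}
    for s, u, e in product(signers, utterances, emotions)
]

assert len(clips) == 2 * 78 * 7 == 1092               # matches the 1,092 clips reported
print(len(clips))  # -> 1092
```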
Similar Papers
Perspectives on Capturing Emotional Expressiveness in Sign Language
Human-Computer Interaction
Helps computers understand feelings in sign language.
Challenges and opportunities in portraying emotion in generated sign language
Computation and Language
Makes computer sign language avatars show feelings.
EASL: Multi-Emotion Guided Semantic Disentanglement for Expressive Sign Language Generation
CV and Pattern Recognition
Makes sign language videos show feelings.