DeepEmoNet: Building Machine Learning Models for Automatic Emotion Recognition in Human Speeches
By: Tai Vu
Potential Business Impact:
Helps computers understand feelings in voices.
Speech emotion recognition (SER) has been a challenging problem in spoken language processing research, because it is unclear how human emotions relate to acoustic components of speech such as pitch, loudness, and energy. This paper aims to tackle this problem using machine learning. In particular, we built several machine learning models using SVMs, LSTMs, and CNNs to classify emotions in human speech. In addition, by leveraging transfer learning and data augmentation, we efficiently trained our models to attain decent performance on a relatively small dataset. Our best model was a ResNet34 network, which achieved an accuracy of $66.7\%$ and an F1 score of $0.631$.
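To make the transfer-learning setup concrete, here is a minimal sketch of fine-tuning an ImageNet-pretrained ResNet34 for emotion classification, in PyTorch. The abstract confirms only the ResNet34 backbone, transfer learning, and data augmentation; the mel-spectrogram input representation, the 8-class label set, the SpecAugment-style masking, and all hyperparameters below are illustrative assumptions, not the paper's reported configuration.

```python
# Hypothetical sketch: fine-tuning a pretrained ResNet34 for speech emotion
# classification. Assumptions (not from the paper): mel-spectrogram inputs,
# 8 emotion classes, ImageNet pretraining, SpecAugment-style augmentation.
import torch
import torch.nn as nn
import torchaudio
from torchvision.models import resnet34, ResNet34_Weights

NUM_EMOTIONS = 8  # assumption: e.g., an 8-label emotion taxonomy

# Transfer learning: load pretrained weights, replace the classifier head.
model = resnet34(weights=ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_EMOTIONS)

# Turn a mono waveform into a 3-channel "image" the CNN can consume.
to_melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

def waveform_to_input(waveform: torch.Tensor) -> torch.Tensor:
    """Map a (1, num_samples) waveform to a (3, n_mels, time) tensor."""
    spec = to_db(to_melspec(waveform))  # (1, n_mels, time)
    return spec.repeat(3, 1, 1)         # tile to 3 channels for ResNet input

# Spectrogram-level augmentation (assumption: SpecAugment-style masking).
augment = nn.Sequential(
    torchaudio.transforms.FrequencyMasking(freq_mask_param=15),
    torchaudio.transforms.TimeMasking(time_mask_param=35),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(batch_specs: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of (B, 3, n_mels, time) spectrograms."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(augment(batch_specs)), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice this illustrates is the one the abstract describes: reusing a large pretrained vision backbone and augmenting the training signal lets a deep model reach reasonable accuracy even when the labeled speech dataset is small.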
Similar Papers
Amplifying Emotional Signals: Data-Efficient Deep Learning for Robust Speech Emotion Recognition
Audio and Speech Processing
Helps computers understand your feelings from your voice.
EmoAugNet: A Signal-Augmented Hybrid CNN-LSTM Framework for Speech Emotion Recognition
Sound
Helps computers understand how you feel when you talk.
EmoHRNet: High-Resolution Neural Network Based Speech Emotion Recognition
Sound
Helps computers understand how you feel from your voice.