A Survey on Multimodal Music Emotion Recognition
By: Rashini Liyanarachchi, Aditya Joshi, Erik Meijering
Potential Business Impact:
Helps computers understand music's feelings.
Multimodal music emotion recognition (MMER) is an emerging discipline in music information retrieval that has experienced a surge in interest in recent years. This survey provides a comprehensive overview of the current state of the art in MMER. In discussing the different approaches and techniques used in the field, the paper introduces a four-stage MMER framework comprising multimodal data selection, feature extraction, feature processing, and final emotion prediction. The survey further reveals significant advancements in deep learning methods and the increasing importance of feature fusion techniques. Despite these advancements, challenges remain, including the need for large annotated datasets, datasets with more modalities, and real-time processing capabilities. This paper also contributes to the field by identifying critical gaps in current research and suggesting potential directions for future work. These gaps underscore the importance of developing robust, scalable, and interpretable models for MMER, with implications for applications in music recommendation systems, therapeutic tools, and entertainment.
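To make the four-stage framework concrete, below is a minimal Python sketch of one possible instantiation, not taken from the survey itself: audio and lyrics as the two selected modalities, hand-rolled stand-in features, early (feature-level) fusion by concatenation, and a placeholder linear classifier with fixed random weights. All function names, the feature choices, and the emotion label set are hypothetical illustrations of the pipeline shape, not the methods the survey evaluates.

```python
import numpy as np


def extract_audio_features(audio: np.ndarray) -> np.ndarray:
    """Stage 2 (audio branch): toy stand-ins for real descriptors such as MFCCs."""
    return np.array([audio.mean(), audio.std(), np.abs(audio).max()])


def extract_lyric_features(lyrics: str) -> np.ndarray:
    """Stage 2 (lyrics branch): toy stand-ins for bag-of-words or text embeddings."""
    words = lyrics.lower().split()
    n = max(len(words), 1)
    return np.array([len(words), len(set(words)), sum(map(len, words)) / n])


def fuse_features(audio_feats: np.ndarray, lyric_feats: np.ndarray) -> np.ndarray:
    """Stage 3: early fusion by concatenating per-modality feature vectors."""
    return np.concatenate([audio_feats, lyric_feats])


def predict_emotion(fused: np.ndarray) -> str:
    """Stage 4: placeholder linear scorer over a hypothetical emotion label set."""
    emotions = ["happy", "sad", "angry", "relaxed"]
    rng = np.random.default_rng(0)  # fixed seed so the toy weights are reproducible
    weights = rng.standard_normal((len(emotions), fused.shape[0]))
    scores = weights @ fused
    return emotions[int(np.argmax(scores))]


if __name__ == "__main__":
    # Stage 1: multimodal data selection -- here, one audio clip plus its lyrics.
    audio = np.sin(np.linspace(0, 2 * np.pi * 440, 22050))  # 1 s synthetic tone
    lyrics = "we danced all night under golden lights"
    fused = fuse_features(extract_audio_features(audio), extract_lyric_features(lyrics))
    print(predict_emotion(fused))
```

In practice, the survey's later stages would replace the concatenation with learned fusion (e.g., attention-based or tensor fusion) and the linear scorer with a trained deep model, but the four-stage decomposition stays the same.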
Similar Papers
A Study on the Data Distribution Gap in Music Emotion Recognition
Sound
Helps computers understand music's feelings better.
Leveraging Label Potential for Enhanced Multimodal Emotion Recognition
Sound
Helps computers understand feelings better by using labels.
ECMF: Enhanced Cross-Modal Fusion for Multimodal Emotion Recognition in MER-SEMI Challenge
CV and Pattern Recognition
Helps computers understand your feelings from faces, voices, words.