Score: 1

A Survey on Multimodal Music Emotion Recognition

Published: April 26, 2025 | arXiv ID: 2504.18799v1

By: Rashini Liyanarachchi, Aditya Joshi, Erik Meijering

Potential Business Impact:

Helps computers recognize the emotions conveyed by music.

Business Areas:
Music Education, Media and Entertainment, Music and Audio

Multimodal music emotion recognition (MMER) is an emerging discipline in music information retrieval that has experienced a surge in interest in recent years. This survey provides a comprehensive overview of the current state of the art in MMER. Discussing the different approaches and techniques used in the field, the paper introduces a four-stage MMER framework comprising multimodal data selection, feature extraction, feature processing, and final emotion prediction. The survey further reveals significant advancements in deep learning methods and the increasing importance of feature fusion techniques. Despite these advancements, challenges remain, including the need for large annotated datasets, datasets with more modalities, and real-time processing capabilities. This paper also contributes to the field by identifying critical gaps in current research and suggesting potential directions for future work. These gaps underscore the importance of developing robust, scalable, and interpretable models for MMER, with implications for applications in music recommendation systems, therapeutic tools, and entertainment.
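
To make the four-stage framework concrete, the sketch below shows one possible instantiation in Python, fusing toy audio and lyric features with a simple early-fusion step before classification. The synthetic data, feature choices, and classifier are illustrative assumptions only, not the specific methods surveyed in the paper.

```python
# Illustrative sketch of a four-stage MMER pipeline (data selection, feature
# extraction, feature processing/fusion, emotion prediction). All inputs here
# are synthetic placeholders; real systems would use audio signals, lyrics, etc.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stage 1: multimodal data selection (synthetic stand-ins for two modalities).
n_songs = 200
audio_raw = rng.normal(size=(n_songs, 40))   # e.g. MFCC-like audio descriptors
lyric_raw = rng.normal(size=(n_songs, 300))  # e.g. averaged word embeddings
labels = rng.integers(0, 4, size=n_songs)    # e.g. four emotion quadrants

# Stage 2: feature extraction (placeholder: append per-song summary statistics).
def extract_features(x):
    return np.hstack([x.mean(axis=1, keepdims=True),
                      x.std(axis=1, keepdims=True),
                      x])

audio_feat = extract_features(audio_raw)
lyric_feat = extract_features(lyric_raw)

# Stage 3: feature processing and fusion. Early fusion concatenates the
# normalised per-modality feature vectors into one representation.
fused = np.hstack([StandardScaler().fit_transform(audio_feat),
                   StandardScaler().fit_transform(lyric_feat)])

# Stage 4: final emotion prediction with a simple classifier.
clf = LogisticRegression(max_iter=1000).fit(fused[:150], labels[:150])
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))
```

Late fusion is an equally common alternative: train one model per modality and combine their predicted emotion probabilities, rather than concatenating features before training.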

Country of Origin
🇦🇺 Australia

Page Count
26 pages

Category
Computer Science:
Multimedia