A Study on the Data Distribution Gap in Music Emotion Recognition

Published: October 6, 2025 | arXiv ID: 2510.04688v1

By: Joann Ching, Gerhard Widmer

Potential Business Impact:

Helps computer systems recognize the emotions conveyed by music more reliably across different genres.

Business Areas:
Music Education, Media and Entertainment, Music and Audio

Music Emotion Recognition (MER) is a task deeply connected to human perception, relying heavily on subjective annotations collected from contributors. Prior studies tend to focus on specific musical styles rather than incorporating a diverse range of genres, such as rock and classical, within a single framework. In this paper, we address the task of recognizing emotion from audio content by investigating five datasets with dimensional emotion annotations -- EmoMusic, DEAM, PMEmo, WTC, and WCMED -- which span various musical styles. We demonstrate the problem of out-of-distribution generalization in a systematic experiment. By closely looking at multiple data and feature sets, we provide insight into genre-emotion relationships in existing data and examine potential genre dominance and dataset biases in certain feature representations. Based on these experiments, we arrive at a simple yet effective framework that combines embeddings extracted from the Jukebox model with chroma features and demonstrate how, alongside a combination of several diverse training sets, this permits us to train models with substantially improved cross-dataset generalization capabilities.
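The framework described in the abstract combines embeddings from the Jukebox model with chroma features to predict dimensional emotion (valence/arousal). The sketch below illustrates the general idea only: feature concatenation followed by a simple regressor. The embedding dimensionality, the use of ridge regression, and the synthetic data are all assumptions for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clips = 200

# Stand-ins for precomputed features (dimensions are hypothetical):
# Jukebox embeddings are high-dimensional; 64 dims used here for brevity.
jukebox_emb = rng.normal(size=(n_clips, 64))
# Chroma features averaged over time: 12 pitch classes.
chroma = rng.random(size=(n_clips, 12))

# Concatenate the two feature views per clip, as the paper's framework
# combines Jukebox embeddings with chroma features.
X = np.hstack([jukebox_emb, chroma])

# Dimensional emotion targets: (valence, arousal), synthetic here.
y = rng.uniform(-1.0, 1.0, size=(n_clips, 2))

# Ridge regression in closed form, a simple stand-in regressor
# (the paper's actual model choice may differ).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ W
print(pred.shape)  # one (valence, arousal) prediction per clip
```

In practice, training on a pooled set of several stylistically diverse datasets (as the paper does with EmoMusic, DEAM, PMEmo, WTC, and WCMED) is what drives the improved cross-dataset generalization; the regressor itself can stay simple.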


Page Count
15 pages

Category
Computer Science:
Sound