Let the Model Learn to Feel: Mode-Guided Tonality Injection for Symbolic Music Emotion Recognition
By: Haiying Xia, Zhongyi Huang, Yumei Tan, and more
Potential Business Impact:
Helps computers understand music's feelings better.
Symbolic music emotion recognition (SMER) is a key task in symbolic music understanding. Recent approaches have shown promising results by fine-tuning large-scale pre-trained models (e.g., MIDIBERT, a benchmark model for symbolic music understanding) to map musical semantics to emotion labels. While these models effectively capture distributional musical semantics, they often overlook tonal structures, particularly musical modes, which music psychology identifies as critical to emotional perception. In this paper, we investigate the representational capacity of MIDIBERT and identify its limitations in capturing mode-emotion associations. To address this issue, we propose a Mode-Guided Enhancement (MoGE) strategy that incorporates psychological insights on mode into the model. Specifically, we first conduct a mode augmentation analysis, which reveals that MIDIBERT fails to effectively encode emotion-mode correlations. We then identify the least emotion-relevant layer within MIDIBERT and introduce a Mode-guided Feature-wise linear modulation injection (MoFi) framework to inject explicit mode features, thereby enhancing the model's capacity for emotional representation and inference. Extensive experiments on the EMOPIA and VGMIDI datasets demonstrate that our mode injection strategy significantly improves SMER performance, achieving accuracies of 75.2% and 59.1%, respectively. These results validate the effectiveness of mode-guided modeling in symbolic music emotion recognition.
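As a rough illustration of the feature-wise linear modulation (FiLM) injection the abstract describes, the sketch below shows how a mode label (e.g., major/minor) could be embedded and used to predict per-channel scale and shift parameters applied to the hidden states of one chosen encoder layer. This is a minimal hypothetical reconstruction, not the authors' MoFi implementation; the ModeFiLM class, the d_model and num_modes values, and the major/minor mode ids are all assumptions.

```python
# Hypothetical sketch of FiLM-style mode injection (not the paper's code).
# A mode id is embedded, then mapped to per-channel gamma (scale) and beta
# (shift), which modulate the output of one MIDIBERT-like encoder layer.
import torch
import torch.nn as nn

class ModeFiLM(nn.Module):
    def __init__(self, d_model: int = 768, num_modes: int = 2):
        super().__init__()
        self.mode_emb = nn.Embedding(num_modes, d_model)
        # Predict gamma and beta jointly from the mode embedding.
        self.to_gamma_beta = nn.Linear(d_model, 2 * d_model)

    def forward(self, hidden: torch.Tensor, mode_id: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model); mode_id: (batch,)
        cond = self.mode_emb(mode_id)                        # (batch, d_model)
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        # Broadcast the modulation over the sequence dimension.
        return gamma.unsqueeze(1) * hidden + beta.unsqueeze(1)

# Usage sketch: modulate the layer judged least emotion-relevant.
film = ModeFiLM()
hidden_states = torch.randn(4, 512, 768)   # placeholder encoder activations
mode_ids = torch.tensor([0, 1, 0, 1])      # assumed: 0 = major, 1 = minor
modulated = film(hidden_states, mode_ids)
```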
Similar Papers
Learning What to Attend First: Modality-Importance-Guided Reasoning for Reliable Multimodal Emotion Understanding
Artificial Intelligence
Helps AI understand feelings better from pictures and words.
Multi Agents Semantic Emotion Aligned Music to Image Generation with Music Derived Captions
Multimedia
Makes music create pictures that feel the same.
Controllable Embedding Transformation for Mood-Guided Music Retrieval
Sound
Changes song mood without changing its style.