Temporal Adaptation of Pre-trained Foundation Models for Music Structure Analysis
By: Yixiao Zhang, Haonan Chen, Ju-Chiang Wang, and more
Potential Business Impact:
Helps computers understand song parts faster.
Audio-based music structure analysis (MSA) is an essential task in Music Information Retrieval that remains challenging due to the complexity and variability of musical form. Recent advances highlight the potential of fine-tuning pre-trained music foundation models for MSA tasks. However, these models are typically trained with high temporal feature resolution and short audio windows, which limits their efficiency and introduces bias when applied to long-form audio. This paper presents a temporal adaptation approach for fine-tuning music foundation models tailored to MSA. Our method enables efficient analysis of full-length songs in a single forward pass by incorporating two key strategies: (1) audio window extension and (2) low-resolution adaptation. Experiments on the Harmonix Set and RWC-Pop datasets show that our method significantly improves both boundary detection and structural function prediction, while maintaining comparable memory usage and inference speed.
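To make the two strategies concrete, below is a minimal, hypothetical PyTorch sketch of the general idea: a pre-trained encoder's frame-level features are average-pooled to a lower temporal resolution before a lightweight prediction head, so a full-length song fits in one forward pass. The names (DummyEncoder, TemporalAdapter, pool_factor, num_classes) and all shapes are illustrative assumptions, not the authors' implementation or any specific foundation model's API.

import torch
import torch.nn as nn


class DummyEncoder(nn.Module):
    """Stand-in for a pre-trained music foundation model: maps raw audio
    to frame-level embeddings. Hop size and hidden dim are illustrative."""

    def __init__(self, hidden_dim: int = 768, hop: int = 320):
        super().__init__()
        self.proj = nn.Conv1d(1, hidden_dim, kernel_size=hop, stride=hop)

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, samples) -> (batch, frames, hidden_dim)
        return self.proj(audio.unsqueeze(1)).transpose(1, 2)


class TemporalAdapter(nn.Module):
    """Wraps the encoder, average-pools its frame features to a coarser
    temporal resolution, and predicts per-frame structural labels, so an
    extended audio window (a whole song) can be handled in a single pass."""

    def __init__(self, encoder: nn.Module, pool_factor: int = 4,
                 hidden_dim: int = 768, num_classes: int = 7):
        super().__init__()
        self.encoder = encoder
        self.pool = nn.AvgPool1d(kernel_size=pool_factor, stride=pool_factor)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(audio)                                # (B, T, D) high-resolution frames
        feats = self.pool(feats.transpose(1, 2)).transpose(1, 2)   # (B, T // pool_factor, D)
        return self.head(feats)                                    # per-frame structure logits


if __name__ == "__main__":
    model = TemporalAdapter(DummyEncoder())
    song = torch.randn(1, 22050 * 240)   # ~4 minutes of mono audio at 22.05 kHz
    logits = model(song)
    print(logits.shape)                  # (1, num_pooled_frames, 7)

In this sketch, boundary detection and structural-function prediction would both read off the pooled per-frame logits; the pooling factor trades temporal precision for the memory and speed needed to process long-form audio in one pass.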
Similar Papers
Structures Meet Semantics: Multimodal Fusion via Graph Contrastive Learning
CV and Pattern Recognition
Helps computers understand feelings from voice, face, and words.
Sound and Music Biases in Deep Music Transcription Models: A Systematic Analysis
Sound
Helps computers understand music better, not just piano.
PSA-MF: Personality-Sentiment Aligned Multi-Level Fusion for Multimodal Sentiment Analysis
Multimedia
Helps computers understand feelings from faces, voices, and words.