Do Foundational Audio Encoders Understand Music Structure?
By: Keisuke Toyama, Zhi Zhong, Akira Takahashi, and more
In music information retrieval (MIR) research, the use of pretrained foundational audio encoders (FAEs) has recently become a trend. FAEs pretrained on large amounts of music and audio data have been shown to improve performance on MIR tasks such as music tagging and automatic music transcription. However, their use for music structure analysis (MSA) remains underexplored. Although many open-source FAE models are available, only a small subset has been examined for MSA, and the impact of factors such as learning method, training data, and model context length on MSA performance remains unclear. In this study, we conduct comprehensive experiments on 11 types of FAEs to investigate how these factors affect MSA performance. Our results demonstrate that FAEs trained with self-supervised masked language modeling on music data are particularly effective for MSA. These findings pave the way for future research in MSA.
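The abstract presupposes a standard MSA pipeline built on top of FAE features: frame-level embeddings are extracted from audio and segment boundaries are estimated from how those embeddings change over time. As a concrete illustration only (not the paper's actual method), the sketch below applies Foote's classic checkerboard-kernel novelty detection to a cosine self-similarity matrix of frame embeddings. The `emb` array is a synthetic stand-in for real FAE output, and `foote_novelty` is an illustrative helper name, not an API from any of the evaluated models.

```python
import numpy as np
from scipy.signal import find_peaks

def foote_novelty(emb: np.ndarray, half: int = 16) -> np.ndarray:
    """Foote-style novelty curve from frame-level embeddings (frames x dims)."""
    # Cosine self-similarity matrix (SSM) over time frames.
    normed = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)
    ssm = normed @ normed.T

    # Gaussian-tapered checkerboard kernel: responds strongly where two
    # internally homogeneous blocks meet, i.e. at a segment boundary.
    idx = np.arange(-half, half)
    taper = np.exp(-((idx / (0.5 * half)) ** 2))
    kernel = np.outer(np.sign(idx + 0.5), np.sign(idx + 0.5)) * np.outer(taper, taper)

    # Correlate the kernel along the SSM's main diagonal.
    n, size = ssm.shape[0], 2 * half
    padded = np.pad(ssm, half)  # zero-pad so every frame gets a full window
    novelty = np.empty(n)
    for t in range(n):
        novelty[t] = np.sum(padded[t:t + size, t:t + size] * kernel)
    return novelty

# Synthetic stand-in for FAE output: three 100-frame "sections" whose
# embeddings differ in mean; a real pipeline would use encoder features here.
rng = np.random.default_rng(0)
emb = np.concatenate([rng.normal(loc=m, size=(100, 64)) for m in (0.0, 1.0, 0.0)])

nov = foote_novelty(emb)
peaks, _ = find_peaks(nov, height=nov.mean() + nov.std())
print("estimated boundary frames:", peaks)  # expected near frames 100 and 200
```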