Musical Score Understanding Benchmark: Evaluating Large Language Models' Comprehension of Complete Musical Scores
By: Congren Dai, Yue Yang, Krinos Li, and more
Potential Business Impact:
Helps computers understand music scores like a human.
Understanding complete musical scores requires reasoning over symbolic structures such as pitch, rhythm, harmony, and form. Despite the rapid progress of Large Language Models (LLMs) and Vision-Language Models (VLMs) in natural language and multimodal tasks, their ability to comprehend musical notation remains underexplored. We introduce the Musical Score Understanding Benchmark (MSU-Bench), the first large-scale, human-curated benchmark for evaluating score-level musical understanding across both textual (ABC notation) and visual (PDF) modalities. MSU-Bench comprises 1,800 generative question-answer (QA) pairs drawn from works spanning Bach, Beethoven, Chopin, Debussy, and others, organised into four progressive levels of comprehension: Onset Information, Notation & Note, Chord & Harmony, and Texture & Form. Through extensive zero-shot and fine-tuned evaluations of more than 15 state-of-the-art (SOTA) models, we reveal sharp modality gaps, fragile level-wise success rates, and the difficulty of sustaining multilevel correctness. Fine-tuning markedly improves performance in both modalities while preserving general knowledge, establishing MSU-Bench as a rigorous foundation for future research at the intersection of Artificial Intelligence (AI), musicology, and multimodal reasoning.
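To make the benchmark's structure concrete, here is a minimal sketch of what a single MSU-Bench item might look like. The schema, field names, and file path below are assumptions for illustration only; the abstract specifies the two modalities (ABC notation and PDF) and the four comprehension levels, but not the dataset format.

```python
# Hypothetical sketch of one MSU-Bench QA item (schema assumed, not from the paper).
from dataclasses import dataclass

@dataclass
class MSUBenchItem:
    score_abc: str   # textual modality: score excerpt in ABC notation
    score_pdf: str   # visual modality: path to the rendered PDF score
    level: str       # one of the four progressive comprehension levels
    question: str    # generative (free-form) question about the score
    answer: str      # human-curated reference answer

item = MSUBenchItem(
    score_abc=(
        "X:1\n"
        "T:Prelude (excerpt)\n"
        "M:4/4\n"
        "L:1/8\n"
        "K:C\n"
        "CEGc e2 d2 | c4 z4 |\n"
    ),
    score_pdf="scores/prelude_page1.pdf",  # hypothetical path
    level="Onset Information",
    question="What are the key and time signatures of this excerpt?",
    answer="C major, 4/4",
)
```

Representing each item this way makes the modality comparison in the paper straightforward to reproduce: the same question can be posed to an LLM with `score_abc` in the prompt, or to a VLM with the page rendered from `score_pdf`.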
Similar Papers
WildScore: Benchmarking MLLMs in-the-Wild Symbolic Music Reasoning
Sound
Helps computers understand and analyze real music.
ABC-Eval: Benchmarking Large Language Models on Symbolic Music Understanding and Instruction Following
Sound
Teaches computers to understand and follow music notes.
Evaluating Multimodal Large Language Models on Core Music Perception Tasks
Sound
Computers can't truly hear music, only read notes.