Factual and Musical Evaluation Metrics for Music Language Models

Published: November 2, 2025 | arXiv ID: 2511.05550v1

By: Daniel Chenyu Lin, Michael Freeman, John Thickstun

Potential Business Impact:

Tests whether music AI models answer questions about audio recordings correctly.

Business Areas:
Music Education, Media and Entertainment, Music and Audio

Music language models (Music LMs), like vision language models, leverage multimodal representations to answer natural language queries about musical audio recordings. Although Music LMs are reportedly improving, we find that current evaluations fail to capture whether their answers are correct. Specifically, for all Music LMs that we examine, widely-used evaluation metrics such as BLEU, METEOR, and BERTScore fail to measure anything beyond linguistic fluency of the model's responses. To measure the true performance of Music LMs, we propose (1) a better general-purpose evaluation metric for Music LMs adapted to the music domain and (2) a factual evaluation framework to quantify the correctness of a Music LM's responses. Our framework is agnostic to the modality of the question-answering model and could be generalized to quantify performance in other open-ended question-answering domains. We use open datasets in our experiments and will release all code on publication.
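The abstract's central claim is that surface-overlap metrics such as BLEU reward linguistic fluency rather than factual correctness. A minimal sketch below illustrates the failure mode with a simplified clipped unigram precision (a stand-in for BLEU-1, not the paper's proposed metric): a fluent but factually wrong answer can score as well as, or better than, a correct paraphrase. The example sentences are hypothetical, not drawn from the paper's datasets.

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    # Clipped unigram precision: fraction of candidate tokens that also
    # appear in the reference (each reference token usable only as often
    # as it occurs there). A simplified stand-in for BLEU-1.
    cand_tokens = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    matches = sum(min(count, ref_counts[tok])
                  for tok, count in Counter(cand_tokens).items())
    return matches / len(cand_tokens)

reference = "the piece is in a minor key and uses a slow tempo"
correct   = "the recording is in a minor key with a slow tempo"   # right facts, paraphrased
wrong     = "the piece is in a major key and uses a fast tempo"   # fluent but factually wrong

print(unigram_precision(correct, reference))  # ~0.818
print(unigram_precision(wrong, reference))    # ~0.833 -- the wrong answer scores higher
```

Because the wrong answer copies more of the reference's surface wording, it outscores the factually correct paraphrase, which is exactly the gap the paper's factual evaluation framework is meant to close.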

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Sound