Factual and Musical Evaluation Metrics for Music Language Models
By: Daniel Chenyu Lin, Michael Freeman, John Thickstun
Potential Business Impact:
Tests if music AI answers questions correctly.
Music language models (Music LMs), like vision language models, leverage multimodal representations to answer natural language queries about musical audio recordings. Although Music LMs are reportedly improving, we find that current evaluations fail to capture whether their answers are correct. Specifically, for all Music LMs that we examine, widely-used evaluation metrics such as BLEU, METEOR, and BERTScore fail to measure anything beyond linguistic fluency of the model's responses. To measure the true performance of Music LMs, we propose (1) a better general-purpose evaluation metric for Music LMs adapted to the music domain and (2) a factual evaluation framework to quantify the correctness of a Music LM's responses. Our framework is agnostic to the modality of the question-answering model and could be generalized to quantify performance in other open-ended question-answering domains. We use open datasets in our experiments and will release all code on publication.
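To make the abstract's central claim concrete, the sketch below shows how surface-overlap metrics such as BLEU and METEOR can assign similar scores to a fluent but factually wrong answer and to a correct one. This is a minimal illustration, not the paper's evaluation code: the example reference, the two candidate answers, and the choice of nltk scorers are assumptions made for demonstration (BERTScore is omitted only because it requires downloading a pretrained model).

```python
# Minimal sketch (not the paper's code): n-gram overlap metrics can rate a
# fluent but factually wrong answer about a music clip nearly as high as a
# correct one. Assumes nltk is installed; downloads WordNet for METEOR.
import nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR needs WordNet synonyms

# Hypothetical reference answer and two model responses about the same clip.
reference = "The excerpt is in 3/4 time and features a solo cello."
correct = "The excerpt is in 3/4 time and features a solo cello melody."
wrong = "The excerpt is in 4/4 time and features a solo violin."

def overlap_scores(candidate: str, ref: str) -> dict:
    """Compute sentence-level BLEU and METEOR for one candidate answer."""
    ref_tok, cand_tok = ref.split(), candidate.split()
    smooth = SmoothingFunction().method1  # avoid zero scores on short strings
    return {
        "BLEU": sentence_bleu([ref_tok], cand_tok, smoothing_function=smooth),
        "METEOR": meteor_score([ref_tok], cand_tok),
    }

print("correct:", overlap_scores(correct, reference))
print("wrong:  ", overlap_scores(wrong, reference))
# Both responses share most of their words with the reference, so their
# scores are close even though the second misstates the meter and the
# instrument -- exactly the factual failure mode the paper argues these
# metrics cannot detect.
```

A factual evaluation framework of the kind the paper proposes would instead check the specific claims in each answer (meter, instrumentation) against ground truth, rather than rewarding word overlap alone.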
Similar Papers
A Survey on Evaluation Metrics for Music Generation
Sound
Helps judge if computer-made music sounds good.
Musical Score Understanding Benchmark: Evaluating Large Language Models' Comprehension of Complete Musical Scores
Sound
Helps computers understand music scores like a human.
Objective Metrics for Evaluating Large Language Models Using External Data Sources
Computation and Language
Tests how smart language models are, fairly, using outside data sources.