Keywords: music language models, music captioning, music question-answering, evaluations, benchmarking
TL;DR: We propose a better general-purpose evaluation metric for Music LMs adapted to the music domain and a factual evaluation framework to quantify the correctness of a Music LM's responses.
Abstract: Music language models (Music LMs), like vision language models, leverage multimodal representations to answer natural language queries about musical audio recordings. Although Music LMs are reportedly improving, we find that current evaluations fail to capture whether their answers are correct. Specifically, for all Music LMs that we examine, widely used evaluation metrics such as BLEU, METEOR, and BERTScore fail to measure anything beyond the linguistic fluency of the model's responses. To measure the true performance of Music LMs, we propose (1) a better general-purpose evaluation metric for Music LMs adapted to the music domain and (2) a factual evaluation framework to quantify the correctness of a Music LM's responses. Our framework is agnostic to the modality of the question-answering model and could be generalized to quantify performance in other open-ended question-answering domains. We use open datasets in our experiments and will release all code upon publication.
Supplementary Material: pdf
Primary Area: datasets and benchmarks
Submission Number: 21316