Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task

21 Nov 2022 (modified: 25 Nov 2024)
Venue: creativeAI
Readers: Everyone
Keywords: text-to-music, pre-trained checkpoint, language-music model, conditional music generation
TL;DR: In this paper, we carry out a study of language-music models trained on large-scale text-music data. We analyse the capabilities and limitations of our model to better understand the potential of language-music models.
Abstract: Benefiting from large-scale datasets and pre-trained models, the field of generative models has recently gained significant momentum. However, most datasets for symbolic music are very small, which potentially limits the performance of data-driven multimodal models. An intuitive solution to this problem is to leverage pre-trained models from other modalities (e.g., natural language) to improve the performance of symbolic music-related multimodal tasks. In this paper, we carry out the first study of generating complete and semantically consistent symbolic music scores from text descriptions, and explore the efficacy of using publicly available checkpoints (i.e., BERT, GPT-2, and BART) for natural language processing in the task of text-to-music generation. Our experimental results show that the improvement from using pre-trained checkpoints is statistically significant in terms of BLEU score and edit distance similarity. We analyse the capabilities and limitations of our model to better understand the potential of language-music models.
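To make the setup concrete, the sketch below illustrates one plausible reading of the approach: a publicly available encoder-decoder checkpoint is treated as a sequence-to-sequence model from a text description to a serialised symbolic score, and the output is scored with BLEU and an edit-distance similarity. Everything in the sketch is an assumption for illustration, not the authors' released code: the `facebook/bart-base` checkpoint, the ABC-style score strings, and the specific BLEU and similarity implementations are stand-ins.

```python
# Illustrative sketch only (not the authors' code): a pre-trained BART
# checkpoint used as a text-to-music sequence-to-sequence model, assuming
# scores are serialised as plain token sequences (e.g., ABC notation).
from difflib import SequenceMatcher

from nltk.translate.bleu_score import sentence_bleu
from transformers import BartForConditionalGeneration, BartTokenizer

# Load a generic pre-trained checkpoint; in practice this model would first
# be fine-tuned on paired text-description / symbolic-score data.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

description = "A cheerful folk tune in D major with a lively 6/8 rhythm."

# Encode the text description and decode a candidate score token sequence.
inputs = tokenizer(description, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=512, num_beams=4)
generated_score = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Hypothetical reference score for evaluation.
reference_score = "X:1\nT:Reference tune\nM:6/8\nK:D\n..."

# BLEU over whitespace tokens, as one possible reading of the paper's metric.
bleu = sentence_bleu([reference_score.split()], generated_score.split())

# Edit-distance similarity: SequenceMatcher.ratio() serves here as a simple
# stand-in for 1 - Levenshtein(a, b) / max(len(a), len(b)).
similarity = SequenceMatcher(None, generated_score, reference_score).ratio()

print(f"BLEU: {bleu:.3f}  edit-distance similarity: {similarity:.3f}")
```

Framing the task this way is what makes encoder-decoder checkpoints such as BART a natural fit: the text description plays the role of the source sequence and the serialised score the target, so the pre-trained language representations can be reused with little architectural change.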
Submission Type: archival
Presentation Type: online
Presenter: Shangda Wu
Community Implementations: 3 code implementations (via CatalyzeX): https://www.catalyzex.com/paper/exploring-the-efficacy-of-pre-trained/code