Language Models are Drummers: Drum Composition with Natural Language Pre-Training

21 Nov 2022 (modified: 21 Jul 2024) · creativeAI · Readers: Everyone
Keywords: music generation, language model, drum, rhythm, artificial intelligence, deep learning, transformer, transfer learning
TL;DR: Pre-trained language models can learn to generate drum grooves with a small amount of data
Abstract: Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility of deep models transferring knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT-3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (a vanilla Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT-3 against those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
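
For readers curious what finetuning a text-pre-trained model on MIDI drum data might involve in practice, below is a minimal sketch of one plausible preprocessing step: encoding a drum MIDI file as a text sequence a language model can consume. The abstract does not specify the paper's actual encoding, so the token format (a quantized onset step plus a drum pitch), the use of the `pretty_midi` library, and the file name `groove.mid` are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: encode a drum MIDI file as space-separated text tokens.
# The token scheme below (16th-note grid step + MIDI pitch) is an assumed,
# illustrative encoding; the paper's own representation may differ.
import pretty_midi

def drums_to_text(midi_path: str, steps_per_beat: int = 4) -> str:
    """Encode the drum notes of a MIDI file as a text token sequence."""
    midi = pretty_midi.PrettyMIDI(midi_path)
    tokens = []
    for instrument in midi.instruments:
        if not instrument.is_drum:
            continue  # keep only drum tracks
        for note in sorted(instrument.notes, key=lambda n: n.start):
            # Quantize the onset to a grid of `steps_per_beat` steps per beat.
            beat = midi.time_to_tick(note.start) / midi.resolution
            step = round(beat * steps_per_beat)
            tokens.append(f"t{step}:p{note.pitch}")
    return " ".join(tokens)

if __name__ == "__main__":
    # e.g. "t0:p36 t0:p42 t2:p42 t4:p38 ..." -- kick, hi-hat, snare events
    print(drums_to_text("groove.mid"))  # hypothetical input file
```

Once a groove is rendered as plain text like this, it can be fed to an off-the-shelf language-model finetuning pipeline without any architectural change, which is what makes the language-to-music transfer described in the abstract possible with only hundreds of examples.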
Submission Type: archival
Presentation Type: onsite
Presenter: Li Zhang
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/language-models-are-drummers-drum-composition/code)