Anticipatory Music Transformer

Published: 19 Apr 2024, Last Modified: 19 Apr 2024. Accepted by TMLR.
Abstract: We introduce anticipation: a method for constructing a controllable generative model of a temporal point process (the event process) conditioned asynchronously on realizations of a second, correlated process (the control process). We achieve this by interleaving sequences of events and controls, such that controls appear following stopping times in the event sequence. This work is motivated by problems arising in the control of symbolic music generation. We focus on infilling control tasks, whereby the controls are a subset of the events themselves, and conditional generation completes a sequence of events given the fixed control events. We train anticipatory infilling models using the large and diverse Lakh MIDI music dataset. These models match the performance of autoregressive models for prompted generation, with the additional capability to perform infilling control tasks, including accompaniment. Human evaluators report that an anticipatory model produces accompaniments with musicality comparable to that of human-composed music, as judged over 20-second clips.
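The sketch below illustrates the interleaving idea described in the abstract: each control token is merged into the event stream so that it appears ahead of the events it anticipates. This is a hypothetical simplification, not the repository's actual tokenization; `delta` (an assumed anticipation interval) and the `(time, payload)` tuple encoding are illustrative assumptions.

```python
# Illustrative sketch of anticipatory interleaving (assumptions, not the
# paper's exact encoding): events and controls are (time, payload) tuples
# sorted by time, and `delta` is a hypothetical anticipation interval.

def interleave(events, controls, delta=1.0):
    """Merge controls into the event stream so that a control at time t
    appears in the sequence just before events with time >= t - delta,
    i.e., the model sees each control `delta` seconds in advance."""
    out = []
    i = j = 0
    while i < len(events) or j < len(controls):
        if j < len(controls) and (
            i == len(events) or controls[j][0] - delta <= events[i][0]
        ):
            out.append(("control", controls[j]))
            j += 1
        else:
            out.append(("event", events[i]))
            i += 1
    return out

events = [(0.0, "C4"), (1.0, "E4"), (2.0, "G4")]
controls = [(2.5, "melody:A4")]
print(interleave(events, controls, delta=1.0))
# The control at t=2.5 is emitted after the events at t=0.0 and t=1.0
# but before the event at t=2.0, anticipating its own time by delta.
```

In this toy arrangement, an autoregressive model trained on the interleaved sequence conditions on upcoming controls while generating the surrounding events, which is the mechanism the abstract credits for infilling and accompaniment.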
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/jthickstun/anticipation/
Assigned Action Editor: ~Brian_Kulis1
Submission Number: 1755