Towards Mixed-initiative generation of multi-channel sequential structure

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: We argue for the benefit of designing deep generative models through a mixed-initiative, co-creative combination of deep learning algorithms and human specifications, focusing on multi-channel music composition. Sequence models have shown convincing results in domains such as summarization and translation; however, longer-term structure remains a major challenge. Given lengthy inputs and outputs, deep generative systems still lack reliable representations of beginnings, middles, and ends, which are standard aspects of creating content in domains such as music composition. This paper contributes a framework for mixed-initiative generation approaches that let humans both supply and control some of these aspects in deep generative models for music, and presents a case study of Counterpoint by Convolutional Neural Network (CoCoNet).
TL;DR: We argue for the benefit of designing deep generative models through a mixed-initiative, co-creative combination of deep learning algorithms and human specifications, focusing on multi-channel music composition.
Keywords: Mixed-initiative, Deep generative models, human-in-the-loop