Keywords: melody generation, DCGAN, dilated convolutions
TL;DR: By representing melodies as images whose semantic units are spatially aligned, we can generate them with a DCGAN without any recurrent components.
Abstract: In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components. Music generation has previously been done successfully with recurrent neural networks, where the model learns sequence information that helps create authentic-sounding melodies. Here, we use a DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information and to learn long-range dependencies in fixed-length melody forms such as the Irish traditional reel.
Code: https://github.com/gan-music-generation/gan_music_generation
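The sketch below is a hypothetical illustration of the idea stated in the abstract, not the authors' implementation (the linked repository may use a different framework, layer sizes, and piano-roll dimensions, which are assumed here): a melody is treated as a single-channel image of shape pitches x time-steps, and a DCGAN discriminator uses convolutions whose dilation grows along the time axis so long-range dependencies are captured as spatial correlations, with no recurrent layers.

```python
# Hypothetical sketch (PyTorch); layer shapes and the 72x128 piano-roll size
# are assumptions for illustration, not taken from the paper's code.
import torch
import torch.nn as nn

class DilatedDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Standard DCGAN-style strided convolution.
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            # Dilated convolutions: growing dilation along the time axis
            # widens the receptive field so distant beats are related
            # as spatial image structure rather than via recurrence.
            nn.Conv2d(32, 64, kernel_size=3, padding=(1, 2), dilation=(1, 2)),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, kernel_size=3, padding=(1, 4), dilation=(1, 4)),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.LazyLinear(1),  # real/fake score
        )

    def forward(self, x):  # x: (batch, 1, pitches, steps)
        return self.net(x)

class Generator(nn.Module):
    def __init__(self, latent_dim=100, pitches=72, steps=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128 * (pitches // 4) * (steps // 4)),
            nn.Unflatten(1, (128, pitches // 4, steps // 4)),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # note-on probabilities in [0, 1]
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(4, 100)
fake = Generator()(z)               # (4, 1, 72, 128) piano-roll "images"
score = DilatedDiscriminator()(fake)
```

Because the melody forms are fixed-length, the whole piece fits in one image, so the GAN can model bar-to-bar structure purely with convolutions.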