Generation of Character Illustrations from Stick Figures Using a Modification of Generative Adversarial Network

2018 (modified: 29 Oct 2021) · COMPSAC (2) 2018
Abstract: We propose a modification of generative adversarial networks (GANs) that generates illustrations of human figures from given poses represented by stick figures. In recent years, various methods for generating images of characters with GANs have been proposed, but it is not yet possible for users to freely designate the poses of human figures. When generating an image of a character, the pose the character takes is an important component of its composition. Thus, it is necessary for a user who wants to create an illustration to be able to specify the pose easily. We collected a set of illustrations of human figures from the internet, and for each illustration a simple line drawing specifying the pose was drawn manually. We constructed a GAN that takes a line drawing as its input and creates an illustration of a person in a pose matching the line drawing. These networks are trained on the dataset we prepared. In this paper, we propose a new network architecture: after constructing two networks, both with almost the same structure as pix2pix, a variant of GANs, we stack those networks following the idea of StackGAN. The experimental results show that, from stick figures representing common poses such as a standing pose, our method was able to successfully generate images of characters. However, for stick figures with rare poses that were not in the dataset, such as figures raising a hand or lying down, the generated images were blurred and not of high quality, though they still had the desired shapes. By expanding the dataset to include more varied poses, it should be possible to generate diverse poses more precisely.
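The two-stage, StackGAN-style composition of pix2pix-like networks described in the abstract can be sketched as follows. This is a minimal illustration of the data flow only, not the paper's implementation: each "generator" below is a placeholder function (the real model uses trained pix2pix-style generator/discriminator pairs), and all function names and tensor shapes are assumptions.

```python
import numpy as np

def stage1_generator(stick_figure: np.ndarray) -> np.ndarray:
    """Stage I: map a stick-figure sketch to a coarse low-resolution image."""
    # Placeholder standing in for a pix2pix-style encoder-decoder:
    # downsample by 2 and squash values into (-1, 1).
    coarse = stick_figure[::2, ::2]
    return np.tanh(coarse)

def stage2_generator(coarse: np.ndarray, stick_figure: np.ndarray) -> np.ndarray:
    """Stage II: refine the coarse output, conditioned again on the sketch."""
    # Placeholder for the second stacked network: upsample the stage-I
    # output back to the input resolution and mix it with the sketch.
    upsampled = np.kron(coarse, np.ones((2, 2)))
    return 0.5 * upsampled + 0.5 * np.tanh(stick_figure)

def generate_illustration(stick_figure: np.ndarray) -> np.ndarray:
    """Full stacked pipeline: stick figure -> coarse image -> refined image."""
    coarse = stage1_generator(stick_figure)
    return stage2_generator(coarse, stick_figure)

# Example: a 64x64 single-channel stick-figure sketch with a vertical "body" line.
sketch = np.zeros((64, 64))
sketch[10:54, 32] = 1.0
result = generate_illustration(sketch)
print(result.shape)  # same spatial size as the input sketch
```

The key design point the sketch mirrors is that the second network receives both the coarse stage-I output and the original stick figure, so pose information is not lost during refinement.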