Abstract: Since their introduction, Generative Adversarial Networks (GANs) have represented the bulk of approaches used in image generation. To date, such approaches have relied exclusively on Machine Learning (ML) to tackle the training problems inherent to GANs. However, in recent years, evolutionary approaches have been making a comeback, not only across the field of ML but in generative modelling specifically. Successes in GPU-accelerated Genetic Programming (GP) led to the introduction of the TGPGAN framework, which used GP as a replacement for the deep convolutional network conventionally used as a GAN generator. In this paper, we delve further into the generative capabilities of evolutionary computation within adversarial models and extend the study performed with TGPGAN to analyse other evolutionary approaches. Similarly to TGPGAN, the presented approaches replace the generator component of a Deep Convolutional GAN (DCGAN): one with a line-drawing Genetic Algorithm (GA) and the other with a Compositional Pattern Producing Network (CPPN). Our comparison of generative performance shows that the GA-based generator performs competitively with the original framework. More importantly, this work showcases the viability of evolutionary approaches other than GP for the purpose of image generation.
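To illustrate the kind of generator the CPPN variant relies on, the sketch below renders an image by feeding each pixel's normalised coordinates (and its distance from the image centre) through a small, randomly weighted network whose output becomes that pixel's intensity. This is a minimal example in Python/NumPy; the function name, layer sizes, and activation choices are illustrative assumptions and do not reflect the implementation evaluated in the paper.

```python
import numpy as np

def cppn_image(width=64, height=64, hidden=32, layers=3, seed=0):
    """Render a greyscale image by evaluating a small, randomly weighted
    coordinate network (CPPN) at every pixel position."""
    rng = np.random.default_rng(seed)

    # Pixel coordinates normalised to [-1, 1], plus distance from the centre.
    xs, ys = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    r = np.sqrt(xs ** 2 + ys ** 2)
    inputs = np.stack([xs, ys, r], axis=-1).reshape(-1, 3)

    # Dense layers with tanh activations; these weights are the genotype
    # that an evolutionary or adversarial signal would adjust.
    h, in_dim = inputs, 3
    for _ in range(layers):
        w = rng.normal(0.0, 1.0, size=(in_dim, hidden))
        h = np.tanh(h @ w)
        in_dim = hidden

    # Output layer maps to a single channel in [0, 1] via a sigmoid.
    w_out = rng.normal(0.0, 1.0, size=(in_dim, 1))
    pixels = 1.0 / (1.0 + np.exp(-(h @ w_out)))
    return pixels.reshape(height, width)

if __name__ == "__main__":
    img = cppn_image()
    print(img.shape, float(img.min()), float(img.max()))
```

Because the output is a function of continuous coordinates, the same genotype can be rendered at any resolution, which is one reason CPPNs are an attractive drop-in generator for adversarial training.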