Parallel Recurrent Data Augmentation for GAN Training with Limited and Diverse Data

25 Mar 2019 (modified: 05 May 2023) · Submitted to LLD 2019
Keywords: GAN training, Data Augmentation
TL;DR: We introduce a novel, simple, and efficient data augmentation method that boosts the performance of existing GANs when training data is limited and diverse.
Abstract: The need for large amounts of training image data with clearly defined features is a major obstacle to applying generative adversarial networks (GANs) to image generation where training data is limited but diverse, since insufficient latent feature representation in the already scarce data often leads to instability and mode collapse during GAN training. To overcome this hurdle, we propose in this paper the strategy of parallel recurrent data augmentation, where the GAN model progressively enriches its training set with sample images constructed from GANs trained in parallel at consecutive training epochs. Experiments on a variety of small yet diverse datasets demonstrate that our method, with few model-specific considerations, produces images of better quality than those generated without such a strategy. The source code and generated images of this paper will be made public after review.
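The abstract describes the training loop only at a high level, so the following is a minimal sketch of how we read it: several GANs are trained in parallel, and after each epoch every model contributes generator samples to a shared pool that is mixed back into the limited real training set for the next epoch. The tiny fully connected architecture, the number of parallel models, the pool size, and the mixing scheme below are all our own assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of parallel recurrent data augmentation (assumptions noted above).
import torch
import torch.nn as nn

Z_DIM, IMG_DIM = 64, 784            # assumed latent size and flattened image size
NUM_GANS, EPOCHS, BATCH = 4, 10, 64 # assumed number of parallel models
SAMPLES_PER_EPOCH = 128             # assumed per-model contribution to the shared pool

def make_gan():
    """One generator/discriminator pair with its optimizers (toy architecture)."""
    g = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(), nn.Linear(256, IMG_DIM), nn.Tanh())
    d = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
    return {"g": g, "d": d,
            "opt_g": torch.optim.Adam(g.parameters(), lr=2e-4),
            "opt_d": torch.optim.Adam(d.parameters(), lr=2e-4)}

bce = nn.BCEWithLogitsLoss()

def train_one_epoch(gan, data):
    """One epoch of standard GAN training on `data` (real + pooled samples)."""
    for real in data.split(BATCH):
        z = torch.randn(real.size(0), Z_DIM)
        fake = gan["g"](z)
        # discriminator step: real labeled 1, fake labeled 0
        gan["opt_d"].zero_grad()
        loss_d = (bce(gan["d"](real), torch.ones(real.size(0), 1)) +
                  bce(gan["d"](fake.detach()), torch.zeros(real.size(0), 1)))
        loss_d.backward(); gan["opt_d"].step()
        # generator step: fool the updated discriminator
        gan["opt_g"].zero_grad()
        loss_g = bce(gan["d"](fake), torch.ones(real.size(0), 1))
        loss_g.backward(); gan["opt_g"].step()

real_data = torch.rand(512, IMG_DIM) * 2 - 1   # stand-in for the limited dataset
gans = [make_gan() for _ in range(NUM_GANS)]
pool = torch.empty(0, IMG_DIM)                 # shared pool of synthetic samples

for epoch in range(EPOCHS):
    train_set = torch.cat([real_data, pool])   # real data enriched by the pool
    new_samples = []
    for gan in gans:                           # "parallel" models (run sequentially here)
        train_one_epoch(gan, train_set[torch.randperm(len(train_set))])
        with torch.no_grad():                  # contribute samples for the next epoch
            new_samples.append(gan["g"](torch.randn(SAMPLES_PER_EPOCH, Z_DIM)))
    pool = torch.cat(new_samples)              # recurrent step: refresh the pool
```

In this reading, the "recurrent" aspect is that each epoch's augmented training set feeds the next epoch's models; whether the authors replace or accumulate the pool across epochs is not specified in the abstract, and the sketch replaces it.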