Abstract: The need for very large training datasets has limited the use of Generative Adversarial Networks (GANs) in settings where data are scarce or poorly labelled (e.g., in many real-life applications). Recently, meta-learning has proved effective for few-shot classification, but its use in noise-to-image generation has been only partially explored. In this paper, we take a first step toward applying a meta-learning algorithm (Reptile) to the discriminator of a GAN and to a mapping network that optimizes the random noise z, guiding the generator into producing images of specific classes. In doing so, we show that the latent-space distribution is crucial for generating sharp samples when few training data are available, and we generate samples of previously unseen classes by optimizing the latent space alone, without changing any parameter of the generator network. Finally, we report experiments on two widely used datasets: MNIST and Omniglot.
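The Reptile algorithm mentioned above follows a simple scheme: adapt a copy of the meta-parameters to a sampled task with a few gradient steps, then move the meta-parameters a fraction of the way toward the adapted ones. The sketch below is a minimal, generic illustration of that meta-update on a toy quadratic task; it is not the paper's code, and `inner_sgd`, `reptile_update`, and `task_grad_fn` are hypothetical names introduced here for illustration.

```python
import numpy as np

def inner_sgd(theta, task_grad_fn, lr=0.1, steps=5):
    # Adapt a copy of the meta-parameters to one task with a few SGD steps.
    phi = theta.copy()
    for _ in range(steps):
        phi -= lr * task_grad_fn(phi)
    return phi

def reptile_update(theta, task_grad_fn, meta_lr=0.1, **inner_kwargs):
    # Reptile meta-update: move meta-parameters toward the task-adapted ones.
    phi = inner_sgd(theta, task_grad_fn, **inner_kwargs)
    return theta + meta_lr * (phi - theta)

# Toy task: loss (p - 1)^2, whose gradient is 2 * (p - 1), optimum at p = 1.
theta = np.array([0.0])
theta_new = reptile_update(theta, lambda p: 2.0 * (p - 1.0))
```

In the paper's setting, the same outer update would be applied to the discriminator and mapping-network parameters across few-shot generation tasks, while the generator's parameters stay fixed.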