Reproduction of GANSpace

Published: 11 Apr 2022, Last Modified: 05 May 2023 (RC2021)
Keywords: GAN, PCA, Interpretable Controls
Abstract:

Scope of Reproducibility: The authors introduce a novel approach to analyzing Generative Adversarial Networks (GANs) and creating interpretable controls for image manipulation and synthesis. Important latent directions are identified by applying Principal Component Analysis (PCA) either in the latent space or in the feature space. We aim to validate the claims and reproduce the results of the original paper (a code sketch of the core idea follows the abstract).

Methodology: The authors' original PyTorch code was reimplemented in TensorFlow 1.x for the pretrained StyleGAN and StyleGAN2 architectures, using the APIs provided by the original authors of those models. The experiments were run on an Intel i7 processor with 16 GB of RAM, coupled with an NVIDIA 1060 GPU with 6 GB of VRAM.

Results: We were able to reproduce the results and verify the claims made by the authors for the StyleGAN and StyleGAN2 models by recreating the modified images, given the seed and other configuration parameters. Additionally, we perform our own experiments to identify new edits and show that edits are transferable across similar datasets using the techniques proposed by the authors.

What was easy: The paper provides detailed explanations of the mathematical concepts involved in the proposed method. This, together with a well-structured and well-documented code repository, allowed us to understand the major ideas in a relatively short period of time. Running the experiments using the original codebase was also straightforward and efficient, as the authors employ batch processing wherever possible.

What was difficult: We originally attempted to recreate identical images with zero delta in the RGB values. However, because the random number generators of PyTorch-CPU, PyTorch-GPU, and NumPy differ, the random values were not the same even with the same seed (illustrated in the second snippet below), which led to minute differences in the background artifacts of the generated images. Additionally, there is a lack of open-source TensorFlow 1.x APIs for accessing the intermediate layers of the BigGAN model; due to time constraints, we were unable to implement these accessors and verify the images that the authors of GANSpace created using BigGAN.

Communication with original authors: We did not contact the original authors while conducting our experiments. The paper and codebase were well organized and helped us effectively reproduce and validate the authors' claims.
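Below is a minimal sketch of the PCA-based edit discovery described under Scope of Reproducibility. It assumes a hypothetical StyleGAN-like generator: the `mapping` and `synthesis` functions are stand-ins, not the authors' API or the actual pretrained networks used in the report.

```python
# Minimal, self-contained sketch of PCA-based latent edit discovery.
# The mapping/synthesis functions are stand-ins (assumption) for a pretrained
# StyleGAN's networks; the report's experiments use the real PyTorch /
# TensorFlow 1.x models instead.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
LATENT_DIM = 512

def mapping(z):
    """Stand-in for StyleGAN's mapping network z -> w (here: identity)."""
    return z

def synthesis(w):
    """Stand-in for StyleGAN's synthesis network w -> image (here: zeros)."""
    return np.zeros((w.shape[0], 3, 256, 256))

# 1. Sample many latent codes z and map them to the intermediate space w.
z_samples = rng.randn(10_000, LATENT_DIM)
w_samples = mapping(z_samples)

# 2. Fit PCA on the w samples; the principal components are candidate
#    interpretable edit directions.
pca = PCA(n_components=64)
pca.fit(w_samples)
directions = pca.components_          # shape (64, LATENT_DIM), orthonormal

# 3. Edit a single image by moving its latent code along component k.
w0 = mapping(rng.randn(1, LATENT_DIM))
k, strength = 3, 2.0                  # component index and edit strength
w_edited = w0 + strength * directions[k]

img_before = synthesis(w0)
img_after = synthesis(w_edited)
```

The second snippet illustrates the seeding issue noted under What was difficult: NumPy and PyTorch use different random number generators, so the same seed does not produce the same samples (and CPU and GPU generation can differ again).

```python
# Same seed, different generators: the sequences generally do not match.
import numpy as np
import torch

np.random.seed(42)
torch.manual_seed(42)

print(np.random.randn(3))             # NumPy's generator
print(torch.randn(3).numpy())         # PyTorch's generator; different values
```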
Paper Url: https://openreview.net/forum?id=WCRIASdNpsR&noteId=geUtL2j1uoH
Supplementary Material: zip