Abstract: This paper presents a novel system for image-to-image translation using Generative Adversarial Networks (GANs) that requires very few training parameters while still maintaining acceptable results. The system learns the loss function itself, depending on the translation task. The reduction in the number of parameters is achieved by modifying and using Fire modules like those in the SqueezeNet architecture. This reduction decreases the model's training time and size, making it feasible to deploy on hardware devices with limited memory. Results on several datasets are presented, and the model is also evaluated on the task of automatic image colorization through a "colorization Turing test", in which human participants were asked to rate both generated and real images on the basis of their "realness". A new method is proposed for the quantitative evaluation of these results which, we believe, is more insightful than previously used methods. The analysis showed that 45% of participants were fooled by our model and that participants were fooled on 53% of images, both of which are significantly higher than the current state-of-the-art results.
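The parameter savings claimed above come from replacing full convolutions with SqueezeNet-style Fire modules (a 1x1 "squeeze" layer feeding parallel 1x1 and 3x3 "expand" layers). As a rough illustration of why this shrinks the model, the sketch below compares weight counts; the channel sizes are hypothetical SqueezeNet-like values, not the configuration actually used in the paper.

```python
# Back-of-the-envelope weight-count comparison between a plain 3x3
# convolution and a SqueezeNet-style Fire module. Channel sizes are
# illustrative only (not taken from the paper).

def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """Weight count of a single k x k convolution (bias terms ignored)."""
    return in_ch * out_ch * k * k

def fire_params(in_ch: int, squeeze: int, expand1: int, expand3: int) -> int:
    """Weight count of a Fire module: squeeze 1x1, then expand 1x1 and 3x3."""
    return (conv_params(in_ch, squeeze, 1)        # squeeze layer
            + conv_params(squeeze, expand1, 1)    # expand 1x1 branch
            + conv_params(squeeze, expand3, 3))   # expand 3x3 branch

if __name__ == "__main__":
    in_ch, s, e1, e3 = 128, 16, 64, 64            # hypothetical sizes
    plain = conv_params(in_ch, e1 + e3, 3)        # 3x3 conv, same output width
    fire = fire_params(in_ch, s, e1, e3)
    print(f"plain 3x3 conv: {plain} weights")     # 147456
    print(f"fire module:    {fire} weights")      # 12288
    print(f"reduction:      {plain / fire:.1f}x") # 12.0x
```

With these illustrative sizes, the Fire module uses roughly a twelfth of the weights of an equivalent 3x3 convolution, which is the kind of reduction that makes the model small enough for memory-constrained devices.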