Iterative GANs for Rotating Visual Objects

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: We are interested in learning visual representations that allow 3D manipulation of visual objects from a single 2D image. We cast this as an image-to-image transformation task and propose Iterative Generative Adversarial Networks (IterGANs) to learn a visual representation that can be used not only for objects seen during training, but also for previously unseen objects. Since object manipulation requires a full understanding of the geometry and appearance of the object, our IterGANs learn an implicit 3D model and a full appearance model of the object, both inferred from a single (test) image. Moreover, the intermediate images generated by IterGANs can be used by additional loss functions to increase the quality of all generated images without the need for additional supervision. Experiments on rotating objects show how IterGANs improve the generation process.
TL;DR: IterGANs use iterative generators to rotate visual objects. The intermediate images allow additional loss functions to be added.
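
To make the iterative-generator idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: a single generator is applied repeatedly so that each step produces a small rotation, and all intermediate images are kept so that extra losses can be attached to them. The module `SmallStepGenerator`, the step count, and the auxiliary loss shown here are hypothetical placeholders used only for illustration.

```python
import torch
import torch.nn as nn

class SmallStepGenerator(nn.Module):
    """Hypothetical image-to-image generator for one small rotation step."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def iterative_generate(generator, x0, num_steps):
    """Apply the same generator num_steps times, returning all intermediates."""
    images = [x0]
    for _ in range(num_steps):
        images.append(generator(images[-1]))
    return images  # images[-1] is the final rotated view

if __name__ == "__main__":
    G = SmallStepGenerator()
    x = torch.rand(4, 3, 64, 64) * 2 - 1           # toy input batch in [-1, 1]
    steps = iterative_generate(G, x, num_steps=6)  # x_0, x_1, ..., x_6
    # Example auxiliary loss over intermediates (a stand-in for the paper's
    # unsupervised losses on intermediate images): keep consecutive steps close.
    aux_loss = sum(
        nn.functional.l1_loss(a, b) for a, b in zip(steps[:-1], steps[1:])
    )
    print(float(aux_loss))
```

Because every intermediate image is returned, any additional loss (adversarial, pixel-wise, or consistency-based) can be applied to each step without requiring extra supervision, which is the property the abstract highlights.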