Learning a 3D-Aware Encoder for Style-based Generative Radiance Field

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: We tackle the task of GAN inversion for 3D generative radiance fields (e.g., StyleNeRF). In the inversion task, we aim to learn an inversion function that projects an input image into the latent space of a generator, from which novel views of the original image can be synthesized. Compared with GAN inversion for 2D generative models, 3D inversion must not only 1) preserve the identity of the input image but also 2) ensure 3D consistency across generated novel views. This requires the latent code obtained from a single-view image to be invariant across multiple views. To address this new challenge, we propose a two-stage encoder for 3D generative NeRF inversion. In the first stage, we introduce a base encoder that converts the input image into a latent code. To ensure the latent code can be used to synthesize identity-preserving and 3D-consistent novel-view images, we train the base encoder with identity contrastive learning. Since collecting real-world multi-view images of the same identity is expensive, we leverage multi-view images synthesized by the generator itself for contrastive learning. In the second stage, to better preserve the identity of the input image, we introduce a residual encoder that refines the latent code and adds finer details to the output image. Through extensive experiments, we demonstrate that our proposed two-stage encoder outperforms existing GAN inversion encoders, both qualitatively and quantitatively, in image reconstruction and novel-view rendering.
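A minimal sketch of the first-stage training signal described in the abstract, written in PyTorch-style Python. The abstract does not specify the exact contrastive loss, so this assumes a standard InfoNCE / supervised-contrastive formulation; the function name, the temperature value, and the (N, V, D) batch layout are illustrative assumptions rather than the authors' method. The idea it illustrates: latent codes encoded from different generator-synthesized views of the same identity are pulled together, while codes from different identities are pushed apart.

import torch
import torch.nn.functional as F

def multiview_identity_contrastive_loss(z_views, temperature=0.07):
    # z_views: (N, V, D) latent codes produced by the base encoder for N
    # identities, each rendered by the generator from V >= 2 viewpoints.
    # Hypothetical batch layout, not taken from the paper.
    N, V, D = z_views.shape
    z = F.normalize(z_views.reshape(N * V, D), dim=1)        # unit-norm codes
    sim = z @ z.t() / temperature                            # cosine-similarity logits
    eye = torch.eye(N * V, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))                # exclude self-pairs
    ids = torch.arange(N, device=z.device).repeat_interleave(V)
    pos = (ids.unsqueeze(0) == ids.unsqueeze(1)) & ~eye      # same identity, other view
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-likelihood of the positive pairs for each anchor code
    loss = -(log_prob * pos).sum(1) / pos.sum(1)
    return loss.mean()

Here z_views would come from encoding V renderings of each identity produced by the generator at different camera poses. The second stage would then refine the base code with a residual encoder, conceptually w = w_base + R(x, G(w_base)); this expression is likewise a schematic illustration, not the authors' exact formulation.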
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning