Learning Non-Linear Disentangled Editing for StyleGAN

ICIP 2021 (modified: 02 Nov 2022)
Abstract: Recent work has demonstrated the great potential of image editing in the latent space of powerful deep generative models such as StyleGAN. However, the success of such methods relies on the assumption that a linear hyperplane can separate the latent space into two subspaces for a binary attribute. In this work, we show that this assumption is a significant limitation and propose to learn a non-linear, regularized, and identity-preserving latent space transformation that leads to more accurate and disentangled manipulations of facial attributes.
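For context, the sketch below contrasts the two editing regimes the abstract refers to: a linear hyperplane edit (moving a latent code along a fixed attribute direction) versus a learned non-linear, regularized, identity-preserving transformation. This is a minimal illustration, assuming a 512-dimensional StyleGAN W-space latent; the network architecture, loss terms, and weights are illustrative placeholders, not the method from the paper.

```python
import torch
import torch.nn as nn

LATENT_DIM = 512  # StyleGAN W-space dimensionality (assumption)


def linear_edit(w: torch.Tensor, n: torch.Tensor, alpha: float) -> torch.Tensor:
    """Linear (hyperplane) editing: shift the latent code along a fixed
    attribute direction n, scaled by an edit strength alpha."""
    return w + alpha * n


class NonLinearEditor(nn.Module):
    """Non-linear editing: a small MLP predicts a latent offset conditioned on
    the current latent code, so the edit direction can vary across the space."""

    def __init__(self, latent_dim: int = LATENT_DIM, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, w: torch.Tensor, alpha: float) -> torch.Tensor:
        return w + alpha * self.net(w)


def edit_loss(w, w_edited, attr_score_fn, identity_loss_fn,
              lambda_reg=1.0, lambda_id=1.0):
    """Illustrative training objective: push the edited latent toward the
    target attribute, while a regularization term keeps the edit small and an
    identity term (e.g. a face-recognition embedding distance, supplied by the
    caller) keeps the subject's identity unchanged."""
    attr_loss = -attr_score_fn(w_edited)        # increase the target attribute
    reg_loss = (w_edited - w).pow(2).mean()     # keep the latent displacement small
    id_loss = identity_loss_fn(w, w_edited)     # preserve identity
    return attr_loss + lambda_reg * reg_loss + lambda_id * id_loss
```

The key difference is that `linear_edit` applies the same direction to every latent code, whereas `NonLinearEditor` can adapt the edit to each input, which is what allows more accurate and disentangled attribute manipulation.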