Decoding Micromotion in Low-dimensional Latent Spaces from StyleGAN

Published: 20 Nov 2023 · Last Modified: 02 Dec 2023 · CPAL 2024 (Proceedings Track) Oral · CC BY 4.0
Keywords: generative model, low-rank decomposition
TL;DR: We show that in StyleGAN's latent space we can consistently find low-dimensional subspaces from which universal editing directions can be reconstructed for many meaningful changes (denoted "micromotions").
Abstract: The disentanglement of the StyleGAN latent space has paved the way for realistic and controllable image editing, but does StyleGAN know anything about temporal motion, given that it was trained only on static images? To study the motion features in the latent space of StyleGAN, in this paper we hypothesize and demonstrate that a series of meaningful, natural, and versatile small, local movements (referred to as "micromotions", such as expression, head movement, and aging effects) can be represented in low-rank spaces extracted from the latent space of a conventionally pre-trained StyleGAN-v2 model for face generation, with the guidance of proper "anchors" in the form of either short text or video clips. Starting from one target face image, with the editing direction decoded from the low-rank space, its micromotion features can be represented as simply as an affine transformation over its latent feature. Perhaps more surprisingly, such a micromotion subspace, even when learned from just a single target face, can be painlessly transferred to other unseen face images, even those from vastly different domains (such as oil painting, cartoon, and sculpture faces). This demonstrates that the local feature geometry corresponding to one type of micromotion is aligned across different face subjects, and hence that StyleGAN-v2 is indeed "secretly" aware of the subject-disentangled feature variations caused by that micromotion. As an application, we present various successful examples of applying our low-dimensional micromotion subspace technique to directly and effortlessly manipulate faces. Compared with previous editing methods, our framework shows high robustness, low computational overhead, and impressive domain transferability. Our code is publicly available.
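The central mechanism the abstract describes, extracting a low-rank editing direction from latent codes and applying it as an affine transformation, can be sketched in a few lines. The following is a minimal illustration, not the authors' released code: it assumes a set of latent codes sampled along one micromotion (e.g., GAN-inverted frames of a short smiling clip), recovers a rank-1 direction via SVD, and moves a new face's latent code along it. The names extract_micromotion_direction, apply_micromotion, and the generator in the commented usage are hypothetical.

```python
import numpy as np

def extract_micromotion_direction(latents):
    """Given latent codes sampled along one micromotion (e.g., GAN-inverted
    frames of a smiling clip), recover a single editing direction as the
    leading right singular vector of the centered latent trajectory."""
    W = np.stack(latents)                # shape: (num_anchors, latent_dim)
    W_centered = W - W.mean(axis=0)      # remove the subject-identity component
    # Rank-1 SVD: the top singular vector spans the low-rank
    # micromotion subspace hypothesized in the paper.
    _, _, vt = np.linalg.svd(W_centered, full_matrices=False)
    return vt[0]                         # unit-norm editing direction

def apply_micromotion(w, direction, alpha):
    """Affine edit in latent space: translate the latent code of any
    (possibly unseen, out-of-domain) face along the shared direction."""
    return w + alpha * direction

# Hypothetical usage: sweep alpha to animate a static face.
# frames = [generator.synthesize(apply_micromotion(w0, d, a))
#           for a in np.linspace(0.0, 3.0, 16)]
```

Because the edit is a pure translation in latent space, the same direction can, in principle, be reused across subjects, which is exactly the cross-domain transferability the abstract claims.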
Track Confirmation: Yes, I am submitting to the proceedings track.
Submission Number: 31