Learning a face space for experiments on human identity
Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
Abstract: Generative models of human identity and appearance have broad applicability to behavioral science and technology, but the exquisite sensitivity of human face perception means that their utility hinges on alignment of the latent representation to human psychological representations and the photorealism of the generated images. Meeting these requirements is an exacting task, and existing models of human identity and appearance are often unworkably abstract, artificial, uncanny, or heavily biased. Here, we use a variational autoencoder with an autoregressive decoder to learn a latent face space from a uniquely diverse dataset of portraits that control much of the variation irrelevant to human identity and appearance. Our method generates photorealistic portraits of fictive identities with a smooth, navigable latent space. We validate our model's alignment with human sensitivities by introducing a psychophysical Turing test for images, which humans mostly fail — a rare occurrence with any interesting generative image model. Lastly, we demonstrate an initial application of our model to the problem of fast search in mental space to obtain detailed police sketches in a small number of trials.
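The abstract's core model class — a variational autoencoder whose decoder is autoregressive over pixels — trains by maximizing the evidence lower bound (ELBO). Below is a minimal NumPy sketch of the two VAE ingredients that are independent of the decoder's architecture: the reparameterization trick and the KL divergence between the approximate posterior and a standard-normal prior. The toy `mu`/`log_var` values stand in for an encoder's outputs; this is an illustration of the generic objective, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_standard_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow
    # through the sampling step during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Toy "encoder" outputs for a batch of 4 portraits with an 8-dim latent space.
mu = 0.1 * rng.standard_normal((4, 8))
log_var = np.full((4, 8), -1.0)

z = reparameterize(mu, log_var)          # latent codes fed to the decoder
kl = kl_standard_normal(mu, log_var)     # per-example KL penalty

# The negative ELBO adds the decoder's reconstruction negative log-likelihood
# to this KL term; in the paper's setting that decoder is autoregressive.
print(z.shape, kl.shape)
```

A smooth, navigable latent space of the kind the abstract describes comes from this KL term pulling the posterior toward an isotropic Gaussian, so interpolating between two codes `z` stays in a high-density region of the prior.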
TL;DR: Learning generative models for faces with realistic sample quality, useful for human experiments.